
3. Explore Yugabyte SQL

  • Set up the sample database
    • Open the YSQL shell
    • Create a database
    • Create the schema
    • Load data
  • Explore YugabyteDB
    • Self-joins
    • Recursive queries
    • LAG window functions
    • Cross table pivots
    • Pattern matches using regular expressions
    • GIN index JSON
    • GIN index text
    • Date intervals
    • Update and return values
    • Prepared statements
    • Stored procedures
    • Triggers
    • Create indexes
  • Next step

After creating a local cluster, you can start exploring YugabyteDB's PostgreSQL-compatible, fully relational Yugabyte SQL API.

Set up the sample database

The examples in this tutorial use two tables, representing departments and employees. To start, use ysqlsh to create a database and schema, and insert data into the tables.

Open the YSQL shell

Using the YugabyteDB SQL shell, ysqlsh, you can connect to your cluster and interact with it using distributed SQL. ysqlsh is installed with YugabyteDB and is located in the bin directory of the YugabyteDB home directory.

macOS and Linux

To open the YSQL shell, run ysqlsh from the YugabyteDB home directory.

$ ./bin/ysqlsh
ysqlsh (11.2-YB-2.1.0.0-b0)
Type "help" for help.

yugabyte=#

Docker

To open the YSQL shell, run ysqlsh inside the container.

$ docker exec -it yugabyte /home/yugabyte/bin/ysqlsh --echo-queries
ysqlsh (11.2-YB-2.1.0.0-b0)
Type "help" for help.

yugabyte=#

Kubernetes

To open the YSQL shell, run ysqlsh on the yb-tserver-0 pod.

$ kubectl --namespace yb-demo exec -it yb-tserver-0 -- sh -c "cd /home/yugabyte && ysqlsh -h yb-tserver-0 --echo-queries"
ysqlsh (11.2-YB-2.1.0.0-b0)
Type "help" for help.

yugabyte=#

Create a database

To create a database (yb_demo), do the following:

  1. Enter the following CREATE DATABASE command:

    yugabyte=# CREATE DATABASE yb_demo;
    
  2. Connect to the new database using the ysqlsh \c meta-command:

    yugabyte=# \c yb_demo
    

Create the schema

The database for this tutorial includes two tables: dept for Departments, and emp for Employees.

Create the database schema by running the following commands.

CREATE TABLE IF NOT EXISTS public.dept (
    deptno integer NOT NULL,
    dname text,
    loc text,
    description text,
    CONSTRAINT pk_dept PRIMARY KEY (deptno ASC)
);
CREATE TABLE IF NOT EXISTS emp (
    empno integer GENERATED BY DEFAULT AS IDENTITY (START WITH 10000) NOT NULL,
    ename text NOT NULL,
    job text,
    mgr integer,
    hiredate date,
    sal integer,
    comm integer,
    deptno integer NOT NULL,
    email text,
    other_info jsonb,
    CONSTRAINT pk_emp PRIMARY KEY (empno HASH),
    CONSTRAINT emp_email_uk UNIQUE (email),
    CONSTRAINT fk_deptno FOREIGN KEY (deptno) REFERENCES dept(deptno),
    CONSTRAINT fk_mgr FOREIGN KEY (mgr) REFERENCES emp(empno),
    CONSTRAINT emp_email_check CHECK ((email ~ '^[a-zA-Z0-9.!#$%&''*+/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$'::text))
);

The emp table references the dept table through a foreign key constraint. The emp table also references itself through a foreign key constraint, which ensures that every employee's manager is also an employee.

The emp table uses constraints to ensure data integrity, such as the uniqueness and validity of email addresses.
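You can see these constraints at work before loading any data. The following illustrative insert (the values are hypothetical, not part of the sample dataset) violates the emp_email_check constraint, so YugabyteDB rejects it:

```sql
-- This insert should fail: 'not-an-email' does not match the
-- regular expression in the emp_email_check constraint.
INSERT INTO emp (empno, ename, deptno, email)
    VALUES (9999, 'TEST', 10, 'not-an-email');
-- ERROR:  new row for relation "emp" violates check constraint "emp_email_check"
```

Similarly, an insert that references a deptno with no matching row in dept fails the fk_deptno foreign key constraint.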

Load data

Insert rows into the tables using multi-value inserts to reduce client-server round trips.

  1. Load data into the dept table by running the following command.

    INSERT INTO dept (deptno,  dname,        loc, description)
       values    (10,     'ACCOUNTING', 'NEW YORK','preparation of financial statements, maintenance of general ledger, payment of bills, preparation of customer bills, payroll, and more.'),
                 (20,     'RESEARCH',   'DALLAS','responsible for preparing the substance of a research report or security recommendation.'),
                 (30,     'SALES',      'CHICAGO','division of a business that is responsible for selling products or services'),
                 (40,     'OPERATIONS', 'BOSTON','administration of business practices to create the highest level of efficiency possible within an organization');
    
  2. Load data into the emp table by running the following command.

    INSERT INTO emp (empno, ename,    job,        mgr,   hiredate,     sal, comm, deptno, email, other_info)
       values   (7369, 'SMITH',  'CLERK',     7902, '1980-12-17',  800, NULL,   20,'SMITH@acme.com', '{"skills":["accounting"]}'),
                (7499, 'ALLEN',  'SALESMAN',  7698, '1981-02-20', 1600,  300,   30,'ALLEN@acme.com', null),
                (7521, 'WARD',   'SALESMAN',  7698, '1981-02-22', 1250,  500,   30,'WARD@compuserve.com', null),
                (7566, 'JONES',  'MANAGER',   7839, '1981-04-02', 2975, NULL,   20,'JONES@gmail.com', null),
                (7654, 'MARTIN', 'SALESMAN',  7698, '1981-09-28', 1250, 1400,   30,'MARTIN@acme.com', null),
                (7698, 'BLAKE',  'MANAGER',   7839, '1981-05-01', 2850, NULL,   30,'BLAKE@hotmail.com', null),
                (7782, 'CLARK',  'MANAGER',   7839, '1981-06-09', 2450, NULL,   10,'CLARK@acme.com', '{"skills":["C","C++","SQL"]}'),
                (7788, 'SCOTT',  'ANALYST',   7566, '1982-12-09', 3000, NULL,   20,'SCOTT@acme.com', '{"cat":"tiger"}'),
                (7839, 'KING',   'PRESIDENT', NULL, '1981-11-17', 5000, NULL,   10,'KING@aol.com', null),
                (7844, 'TURNER', 'SALESMAN',  7698, '1981-09-08', 1500,    0,   30,'TURNER@acme.com', null),
                (7876, 'ADAMS',  'CLERK',     7788, '1983-01-12', 1100, NULL,   20,'ADAMS@acme.org', null),
                (7900, 'JAMES',  'CLERK',     7698, '1981-12-03',  950, NULL,   30,'JAMES@acme.org', null),
                (7902, 'FORD',   'ANALYST',   7566, '1981-12-03', 3000, NULL,   20,'FORD@acme.com', '{"skills":["SQL","CQL"]}'),
                (7934, 'MILLER', 'CLERK',     7782, '1982-01-23', 1300, NULL,   10,'MILLER@acme.com', null);
    

You now have sample data and are ready to begin exploring YSQL in YugabyteDB.

Explore YugabyteDB

To display the schema of the emp table, enter the following shell meta-command:

yb_demo=# \d emp
                               Table "public.emp"
   Column   |  Type   | Collation | Nullable |             Default
------------+---------+-----------+----------+----------------------------------
 empno      | integer |           | not null | generated by default as identity
 ename      | text    |           | not null |
 job        | text    |           |          |
 mgr        | integer |           |          |
 hiredate   | date    |           |          |
 sal        | integer |           |          |
 comm       | integer |           |          |
 deptno     | integer |           | not null |
 email      | text    |           |          |
 other_info | jsonb   |           |          |
Indexes:
    "pk_emp" PRIMARY KEY, lsm (empno HASH)
    "emp_email_uk" UNIQUE CONSTRAINT, lsm (email HASH)
Check constraints:
    "emp_email_check" CHECK (email ~ '^[a-zA-Z0-9.!#$%&''*+/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$'::text)
Foreign-key constraints:
    "fk_deptno" FOREIGN KEY (deptno) REFERENCES dept(deptno)
    "fk_mgr" FOREIGN KEY (mgr) REFERENCES emp(empno)
Referenced by:
    TABLE "emp" CONSTRAINT "fk_mgr" FOREIGN KEY (mgr) REFERENCES emp(empno)

Self-joins

A self-join is a regular join where a table is joined with itself. The following statement matches employees with their manager and filters for those earning more than their manager.

SELECT
    employee.ename,
    employee.sal,
    manager.ename AS "manager ename",
    manager.sal AS "manager sal"
FROM
    emp employee
JOIN emp manager ON
    employee.mgr = manager.empno
WHERE
    manager.sal<employee.sal
ORDER BY employee.sal;
 ename | sal  | manager ename | manager sal
-------+------+---------------+-------------
 FORD  | 3000 | JONES         |        2975
 SCOTT | 3000 | JONES         |        2975
(2 rows)

Recursive queries

The following example uses a recursive common table expression (CTE) to show the management hierarchy. The emp_manager CTE is defined with the WITH RECURSIVE clause to follow the hierarchy under JONES down to the last level. The first subquery in the recursive clause starts at JONES. The second subquery joins emp back to the CTE to find the employees reporting to anyone already in the hierarchy. The two subqueries are combined with UNION ALL, and the second is evaluated repeatedly until no new rows are produced. The main query is then run on the CTE.

WITH RECURSIVE emp_manager AS (
    SELECT empno, ename, ename AS PATH
        FROM emp WHERE ename='JONES'
    UNION ALL
    SELECT emp.empno, emp.ename, emp_manager.path || ' manages ' || emp.ename
        FROM emp JOIN emp_manager ON emp.mgr = emp_manager.empno
)
SELECT * FROM emp_manager;
 empno | ename |               path
-------+-------+-----------------------------------
  7566 | JONES | JONES
  7788 | SCOTT | JONES manages SCOTT
  7902 | FORD  | JONES manages FORD
  7876 | ADAMS | JONES manages SCOTT manages ADAMS
  7369 | SMITH | JONES manages FORD manages SMITH
(5 rows)
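The WITH RECURSIVE pattern above is standard SQL, so you can experiment with the traversal logic on any engine. The following is an illustrative sketch in Python against an in-memory SQLite database, using a minimal emp table (empno, ename, mgr) seeded with the same sample values:

```python
import sqlite3

# Minimal reproduction of the tutorial's hierarchy data (empno, ename, mgr).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (empno INTEGER PRIMARY KEY, ename TEXT, mgr INTEGER);
INSERT INTO emp VALUES (7566,'JONES',7839), (7788,'SCOTT',7566),
                       (7902,'FORD',7566), (7876,'ADAMS',7788),
                       (7369,'SMITH',7902);
""")

# The same recursive CTE: the first branch seeds the walk at JONES, the
# second repeatedly joins emp to the rows found so far.
paths = [r[0] for r in conn.execute("""
WITH RECURSIVE emp_manager AS (
    SELECT empno, ename, ename AS path FROM emp WHERE ename='JONES'
    UNION ALL
    SELECT emp.empno, emp.ename, emp_manager.path || ' manages ' || emp.ename
    FROM emp JOIN emp_manager ON emp.mgr = emp_manager.empno
)
SELECT path FROM emp_manager
""")]
# paths contains one management chain per employee under (and including) JONES.
```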

LAG window functions

Use analytic window functions to compare the hiring time interval by department.

The following SQL statement uses WINDOW to define groups of employees by department, ordered by hiring date. The LAG window function is used to access the previous row to compare the hiring date interval between two employees. FORMAT builds text from column values, and COALESCE handles the first hire for which there is no previous row in the group. Without these window functions, this query would need to read the table twice.

SELECT
    dname,ename,job,
    coalesce (
        'hired '||to_char(hiredate -
        lag(hiredate) over (per_dept_hiredate),'999')||' days after '||
        lag(ename) over (per_dept_hiredate),
        format('(1st hire in %L)',dname)
    ) AS "last hire in dept"
    FROM emp JOIN dept USING(deptno)
    WINDOW per_dept_hiredate
    AS (PARTITION BY dname ORDER BY hiredate)
    ORDER BY dname,hiredate;
   dname    | ename  |    job    |      last hire in dept
------------+--------+-----------+------------------------------
 ACCOUNTING | CLARK  | MANAGER   | (1st hire in 'ACCOUNTING')
 ACCOUNTING | KING   | PRESIDENT | hired  161 days after CLARK
 ACCOUNTING | MILLER | CLERK     | hired   67 days after KING
 RESEARCH   | SMITH  | CLERK     | (1st hire in 'RESEARCH')
 RESEARCH   | JONES  | MANAGER   | hired  106 days after SMITH
 RESEARCH   | FORD   | ANALYST   | hired  245 days after JONES
 RESEARCH   | SCOTT  | ANALYST   | hired  371 days after FORD
 RESEARCH   | ADAMS  | CLERK     | hired   34 days after SCOTT
 SALES      | ALLEN  | SALESMAN  | (1st hire in 'SALES')
 SALES      | WARD   | SALESMAN  | hired    2 days after ALLEN
 SALES      | BLAKE  | MANAGER   | hired   68 days after WARD
 SALES      | TURNER | SALESMAN  | hired  130 days after BLAKE
 SALES      | MARTIN | SALESMAN  | hired   20 days after TURNER
 SALES      | JAMES  | CLERK     | hired   66 days after MARTIN
(14 rows)
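If it helps to see exactly what LAG computes, the same per-partition logic can be written out in plain Python. This is an illustrative sketch over a hypothetical three-row subset of the SALES hires:

```python
from datetime import date
from itertools import groupby

# Hypothetical subset of the emp/dept sample data: (dname, ename, hiredate).
hires = [
    ("SALES", "ALLEN", date(1981, 2, 20)),
    ("SALES", "WARD",  date(1981, 2, 22)),
    ("SALES", "BLAKE", date(1981, 5, 1)),
]

# LAG(x) OVER (PARTITION BY dname ORDER BY hiredate): within each department,
# ordered by hire date, LAG returns the previous row's value.
out = []
for dname, grp in groupby(sorted(hires, key=lambda r: (r[0], r[2])),
                          key=lambda r: r[0]):
    prev = None  # LAG is NULL for the first row of each partition
    for _, ename, hiredate in grp:
        if prev is None:
            out.append(f"(1st hire in '{dname}')")
        else:
            days = (hiredate - prev[1]).days
            out.append(f"{ename} hired {days} days after {prev[0]}")
        prev = (ename, hiredate)
```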

Cross table pivots

Use a cross table to show the sum of salary per job, by department. The \crosstabview meta-command pivots rows into columns. The following statement sums the salaries across jobs and departments and displays them as a cross table.

SELECT job, dname, sum(sal)
    FROM emp JOIN dept USING(deptno)
    GROUP BY dname, job
    \crosstabview
    job    | ACCOUNTING | SALES | RESEARCH
-----------+------------+-------+----------
 PRESIDENT |       5000 |       |
 CLERK     |       1300 |   950 |     1900
 SALESMAN  |            |  5600 |
 MANAGER   |       2450 |  2850 |     2975
 ANALYST   |            |       |     6000
(5 rows)
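Conceptually, \crosstabview builds a two-level mapping from the grouped result. An illustrative plain-Python sketch over a hypothetical subset of the aggregated rows:

```python
from collections import defaultdict

# Hypothetical subset of the grouped (job, dname, sum) rows from the query above.
rows = [("PRESIDENT", "ACCOUNTING", 5000), ("CLERK", "ACCOUNTING", 1300),
        ("CLERK", "SALES", 950), ("CLERK", "RESEARCH", 1900),
        ("SALESMAN", "SALES", 5600)]

# Pivot: one row per job, one column per department.
pivot = defaultdict(dict)
for job, dname, total in rows:
    pivot[job][dname] = total
# pivot["CLERK"] spreads the CLERK totals across the department columns.
```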

Pattern matches using regular expressions

Use an array of regular expressions to do pattern matching. The following statement matches all employees whose email address contains "@gmail." or ends in ".org".

SELECT * FROM emp
    WHERE email ~ any ( ARRAY[ '@.*\.org$' , '@gmail\.' ] );
 empno | ename |   job   | mgr  |  hiredate  | sal  | comm | deptno |      email      | other_info
-------+-------+---------+------+------------+------+------+--------+-----------------+------------
  7876 | ADAMS | CLERK   | 7788 | 1983-01-12 | 1100 |      |     20 | ADAMS@acme.org  |
  7566 | JONES | MANAGER | 7839 | 1981-04-02 | 2975 |      |     20 | JONES@gmail.com |
  7900 | JAMES | CLERK   | 7698 | 1981-12-03 |  950 |      |     30 | JAMES@acme.org  |
(3 rows)
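The `~ any(ARRAY[...])` construct succeeds when at least one pattern in the array matches. The equivalent check in Python (illustrative, using the same two patterns):

```python
import re

patterns = [r'@.*\.org$', r'@gmail\.']  # same patterns as the SQL array
emails = ['ADAMS@acme.org', 'JONES@gmail.com', 'SMITH@acme.com']

# email ~ any(ARRAY[...]): keep an email if any pattern matches it.
matched = [e for e in emails if any(re.search(p, e) for p in patterns)]
# matched keeps ADAMS@acme.org and JONES@gmail.com; SMITH@acme.com matches neither.
```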

GIN index JSON

The employee skills are stored in a semi-structured JSON document. You can query them using the @>, ?, ?&, and ?| operators. For best performance, index them using a GIN index. GIN indexes provide quick access to elements inside a JSON document.

(GIN indexes are only available in YugabyteDB v2.11 or later. If you are using an earlier version, skip this scenario.)

  1. Create the GIN index on the JSON document.

    CREATE INDEX emp_skills ON emp USING gin((other_info->'skills'));
    

    This creates an index on the skills attributes in the other_info JSON column.

  2. Query on the JSON attribute list. SQL queries can navigate into the JSON document using -> and check whether an array contains a value using @>. The following searches for employees with the "SQL" skill.

    SELECT * FROM emp WHERE other_info->'skills' @> '"SQL"';
    
  3. Explain the plan to verify that the index is used.

    explain SELECT * FROM emp WHERE other_info->'skills' @> '"SQL"';
    

    Thanks to the GIN index, this search doesn't need to read all documents.

GIN index text

GIN indexes also provide fast access to words inside text. The following creates an index on the tsvector of words extracted from the department description using the 'simple' text search configuration.

(GIN indexes are only available in YugabyteDB v2.11 or later. If you are using an earlier version, skip this scenario.)

  1. Create a text search index on the description column.

    CREATE INDEX dept_description_text_search ON dept
        USING ybgin (( to_tsvector('simple',description) ));
    
  2. Query on description for matching words. The following compares the tsvector of words extracted from the department description with a word search pattern to find the departments that contain both "responsible" and "services" in their description.

    SELECT * FROM dept
        WHERE to_tsvector('simple',description) @@ to_tsquery('simple','responsible & services');
    
  3. Explain the plan to verify that the index is used.

    explain SELECT * FROM dept
        WHERE to_tsvector('simple',description) @@ to_tsquery('simple','responsible & services');
    

Thanks to the GIN index, this search doesn't need to read all rows and text.

Date intervals

Using arithmetic on date intervals, you can find employees with overlapping evaluation periods.

The interval data type allows you to store and manipulate a period of time in years, months, days, and so forth. The following example compares overlapping evaluation periods. The WITH clause defines the evaluation period length depending on the job.

WITH emp_evaluation_period AS (
    SELECT ename,deptno,hiredate,
        hiredate + CASE WHEN job IN ('MANAGER','PRESIDENT')
        THEN interval '3 month' ELSE interval '4 weeks'
        END evaluation_end FROM emp
    )
SELECT * FROM emp_evaluation_period e1
    JOIN emp_evaluation_period e2
    ON (e1.ename>e2.ename) AND (e1.deptno=e2.deptno)
    WHERE (e1.hiredate,e1.evaluation_end)
    overlaps (e2.hiredate,e2.evaluation_end);
 ename  | deptno |  hiredate  |   evaluation_end    | ename  | deptno |  hiredate  |   evaluation_end
--------+--------+------------+---------------------+--------+--------+------------+---------------------
 MILLER |     10 | 1982-01-23 | 1982-02-20 00:00:00 | KING   |     10 | 1981-11-17 | 1982-02-17 00:00:00
 TURNER |     30 | 1981-09-08 | 1981-10-06 00:00:00 | MARTIN |     30 | 1981-09-28 | 1981-10-26 00:00:00
 WARD   |     30 | 1981-02-22 | 1981-03-22 00:00:00 | ALLEN  |     30 | 1981-02-20 | 1981-03-20 00:00:00
(3 rows)
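The OVERLAPS predicate is equivalent to checking that each period starts before the other ends. A plain-Python sketch of that test (illustrative; the 3-month interval is approximated as 90 days here, whereas SQL's interval '3 month' is calendar-aware):

```python
from datetime import date, timedelta

def overlaps(start1, end1, start2, end2):
    # Two periods overlap when each one starts before the other ends.
    return start1 < end2 and start2 < end1

# Evaluation periods derived from the sample hire dates (3 months ~ 90 days).
king   = (date(1981, 11, 17), date(1981, 11, 17) + timedelta(days=90))
miller = (date(1982, 1, 23),  date(1982, 1, 23)  + timedelta(weeks=4))
turner = (date(1981, 9, 8),   date(1981, 9, 8)   + timedelta(weeks=4))

king_miller = overlaps(*king, *miller)    # True: MILLER starts before KING ends
miller_turner = overlaps(*miller, *turner)  # False: TURNER's period ended earlier
```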

Update and return values

The UPDATE statement can compute new values and return them in the same call using the RETURNING clause, avoiding a separate query. The following adds 100 to the salaries of all employees who are not managers and shows the new values.

UPDATE emp SET sal=sal+100
    WHERE job != 'MANAGER'
    returning ename,sal AS new_salary;
 ename  | new_salary
--------+------------
 SMITH  |        900
 ADAMS  |       1200
 WARD   |       1350
 KING   |       5100
 FORD   |       3100
 MARTIN |       1350
 JAMES  |       1050
 ALLEN  |       1700
 MILLER |       1400
 SCOTT  |       3100
 TURNER |       1600
(11 rows)

UPDATE 11

Prepared statements

Use a prepared statement with typed input to prevent SQL injection. A prepared statement declares a parameterized query once so it can be executed repeatedly with different values.

  1. Prepare the statement employee_salary with a parameterized query. The following prepared statement accepts the input of an employee number as an integer only, and displays the name and salary.

    prepare employee_salary(int) AS
        SELECT ename,sal FROM emp WHERE empno=$1;
    
    PREPARE
    
  2. Use the EXECUTE statement to execute a prepared statement. The following executes the prepared statement for the employee ID 7900.

    EXECUTE employee_salary(7900);
    
     ename | sal
    -------+------
     JAMES | 1050
    (1 row)
    
  3. Execute the same prepared statement with another value.

    EXECUTE employee_salary(7902);
    
     ename | sal
    -------+------
     FORD  | 3100
    (1 row)
    
  4. A prepared statement stays in the session until it is de-allocated. The following frees the memory used by this statement.

    DEALLOCATE employee_salary;
    
    DEALLOCATE
    
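Client drivers expose the same mechanism through parameterized queries: the SQL text and the input values travel separately, so input can never be parsed as SQL. An illustrative Python sketch against an in-memory SQLite database (placeholder syntax varies by driver):

```python
import sqlite3

# Hypothetical miniature of the emp table with two of the sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER PRIMARY KEY, ename TEXT, sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [(7900, "JAMES", 1050), (7902, "FORD", 3100)])

# The ? placeholder plays the role of $1 in the prepared statement:
# the value 7900 is bound as typed data, never interpolated into the SQL.
row = conn.execute("SELECT ename, sal FROM emp WHERE empno = ?", (7900,)).fetchone()
```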

Stored procedures

A stored procedure encapsulates procedural logic into an atomic operation. Use stored procedures to encapsulate transactions with error handling. The following example creates a procedure in PL/pgSQL, named "commission_transfer", that transfers a commission "amount" from empno1 to empno2.

  1. Create the procedure with the business logic. The procedure has two SQL operations: decrease from empno1 and add to empno2. It also adds error checking to raise a custom exception if empno1 doesn't have sufficient funds to transfer.

    CREATE OR REPLACE PROCEDURE commission_transfer(empno1 int, empno2 int, amount int) AS $$
    begin
        update emp set comm=comm-commission_transfer.amount
            where empno=commission_transfer.empno1 and comm>commission_transfer.amount;
        if not found then raise exception 'Cannot transfer % from %',amount,empno1; end if;
        update emp set comm=comm+commission_transfer.amount
            where emp.empno=commission_transfer.empno2;
        if not found then raise exception 'Cannot transfer from %',empno2; end if;
    end;
    $$ language plpgsql;
    
    CREATE PROCEDURE
    
  2. Call the procedure with employee IDs and the amount to be transferred. The following CALL statement calls the stored procedure, with values for all parameters, transferring 100 from employee 7521 to employee 7654.

    CALL commission_transfer(7521,7654,100);
    
    CALL
    
  3. Check the result.

    SELECT * FROM emp WHERE comm IS NOT NULL;
    
    empno | ename  |   job    | mgr  |  hiredate  | sal  | comm | deptno |        email        | other_info
    -------+--------+----------+------+------------+------+------+--------+---------------------+------------
      7521 | WARD   | SALESMAN | 7698 | 1981-02-22 | 1350 |  400 |     30 | WARD@compuserve.com |
      7654 | MARTIN | SALESMAN | 7698 | 1981-09-28 | 1350 | 1500 |     30 | MARTIN@acme.com     |
      7499 | ALLEN  | SALESMAN | 7698 | 1981-02-20 | 1700 |  300 |     30 | ALLEN@acme.com      |
      7844 | TURNER | SALESMAN | 7698 | 1981-09-08 | 1600 |    0 |     30 | TURNER@acme.com     |
    (4 rows)
    
  4. Call the procedure with an amount that is not available. The following attempts to transfer 999999, which is more than what 7521 has available. This raises the "Cannot transfer" error defined in the procedure, and automatically reverts all intermediate changes to return to a consistent state.

    CALL commission_transfer(7521,7654,999999);
    
    ERROR:  Cannot transfer 999999 from 7521
    CONTEXT:  PL/pgSQL function commission_transfer(integer,integer,integer) line 5 at RAISE
    
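The rollback-on-error behavior can also be sketched client-side: run both updates in one transaction and roll everything back when the guard fails. The following is an illustrative Python/SQLite analogue of the procedure, not the YSQL implementation (the starting commissions are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER PRIMARY KEY, comm INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)", [(7521, 400), (7654, 1500)])
conn.commit()

def commission_transfer(empno1, empno2, amount):
    """Move amount between commissions atomically; roll back on any failure."""
    cur = conn.cursor()
    try:
        cur.execute("UPDATE emp SET comm = comm - ? WHERE empno = ? AND comm > ?",
                    (amount, empno1, amount))
        if cur.rowcount == 0:  # guard: insufficient funds (or unknown empno1)
            raise ValueError(f"Cannot transfer {amount} from {empno1}")
        cur.execute("UPDATE emp SET comm = comm + ? WHERE empno = ?",
                    (amount, empno2))
        conn.commit()
    except Exception:
        conn.rollback()  # revert the partial first update
        raise
```

A successful call such as commission_transfer(7521, 7654, 100) moves the amount across; a call with an oversized amount raises and leaves both rows untouched.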

Triggers

Use triggers to automatically update data. This example uses a trigger to record the last time each row is updated automatically.

  1. Add a column to store the last update time.

    ALTER TABLE dept ADD last_update timestamptz;
    
    ALTER TABLE
    
  2. Add a function to set the last update time. The following uses the transaction_timestamp() function, which returns the current date and time at the start of the current transaction. A stored function declares procedural logic that returns a value. In this case, dept_last_update() returns the "new" row state for the trigger after setting "last_update" to the current time.

    CREATE OR REPLACE FUNCTION dept_last_update() RETURNS trigger AS $$
        begin
            new.last_update:=transaction_timestamp();
            return new;
        end;
    $$ language plpgsql;
    
    CREATE FUNCTION
    
  3. Use a trigger to call the function automatically. The following statement creates a trigger that executes the dept_last_update() function on each row update of the dept table (the trigger syntax uses the keyword PROCEDURE for historical reasons).

    CREATE trigger dept_last_update
        BEFORE UPDATE ON dept
        for each row
        EXECUTE PROCEDURE dept_last_update();
    
    CREATE TRIGGER
    
  4. Display the current state of the table. To verify the automatic logging of the last update time, the following displays the current state of departments before any update.

    SELECT deptno,dname,loc,last_update FROM dept;
    
    deptno |   dname    |   loc    | last_update
    --------+------------+----------+-------------
        10 | ACCOUNTING | NEW YORK |
        20 | RESEARCH   | DALLAS   |
        30 | SALES      | CHICAGO  |
        40 | OPERATIONS | BOSTON   |
    (4 rows)
    
  5. Update multiple rows in a single transaction. You can declare multiple updates in a single atomic transaction using BEGIN TRANSACTION and COMMIT. The following updates the location of departments 30 and 40 with a 3 second interval.

    BEGIN TRANSACTION;
    UPDATE dept SET loc='SUNNYVALE' WHERE deptno=30;
    SELECT pg_sleep(3);
    UPDATE dept SET loc='SUNNYVALE' WHERE deptno=40;
    COMMIT;
    
    BEGIN
    UPDATE 1
    pg_sleep
    ----------
    
    (1 row)
    UPDATE 1
    COMMIT
    
  6. Display the new state of the table.

    SELECT deptno,dname,loc,last_update FROM dept;
    
    deptno |   dname    |    loc    |          last_update
    --------+------------+-----------+-------------------------------
        10 | ACCOUNTING | NEW YORK  |
        20 | RESEARCH   | DALLAS    |
        30 | SALES      | SUNNYVALE | 2022-01-11 22:15:34.831474+00
        40 | OPERATIONS | SUNNYVALE | 2022-01-11 22:15:34.831474+00
    (4 rows)
    

In addition to the changed location, the last update timestamp has been set automatically. Although the two updates ran 3 seconds apart, they show the same timestamp because they were executed in the same atomic transaction.
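The same maintain-a-timestamp pattern can be tried locally on any engine with triggers. An illustrative Python/SQLite sketch follows; SQLite has no BEFORE-row NEW assignment, so an AFTER trigger re-updates the row instead, and its CURRENT_TIMESTAMP does not share YSQL's per-transaction semantics:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dept (deptno INTEGER PRIMARY KEY, loc TEXT, last_update TEXT);
INSERT INTO dept (deptno, loc) VALUES (30, 'CHICAGO');

-- Fire after any change to loc and stamp the modified row.
CREATE TRIGGER dept_last_update AFTER UPDATE OF loc ON dept
BEGIN
    UPDATE dept SET last_update = CURRENT_TIMESTAMP WHERE deptno = NEW.deptno;
END;
""")

conn.execute("UPDATE dept SET loc='SUNNYVALE' WHERE deptno=30")
loc, last_update = conn.execute(
    "SELECT loc, last_update FROM dept WHERE deptno=30").fetchone()
# last_update was set automatically alongside the location change.
```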

Create indexes

Use indexes to query table values more efficiently.

  1. Create a table with randomly generated rows. You can use the GENERATE_SERIES function to generate rows. The following uses GENERATE_SERIES to create a table with 42 rows and a random value from 0 to 10.

    CREATE TABLE demo AS SELECT generate_series(1,42) num, round(10*random()) val;
    
    SELECT 42
    
  2. Create the index demo_val on the demo table. The following statement creates an index on val (hashed for distribution) and num in ascending order.

    CREATE INDEX demo_val ON demo(val,num);
    
    CREATE INDEX
    
  3. Use ANALYZE to gather optimizer statistics on the table. The query planner chooses the best access path when provided with statistics about the data stored in the table.

    analyze demo;
    
    ANALYZE
    
  4. Query the Top 3 numbers for a specific value.

    SELECT * FROM demo WHERE val=5 ORDER BY num FETCH FIRST 3 ROWS only;
    
    num | val
    -----+-----
      11 |   5
      35 |   5
    (2 rows)
    
  5. Use EXPLAIN ANALYZE to verify that no Sort operation is needed, thanks to the index. When defining an index for a specific access pattern, verify that the index is used. The following shows that an "Index Only Scan" was used, without the need for an additional "Sort" operation.

    EXPLAIN ANALYZE SELECT * FROM demo WHERE val=5 ORDER BY num FETCH FIRST 3 ROWS only;
    
                                                            QUERY PLAN
    ---------------------------------------------------------------------------------------------------------------------------
    Limit  (cost=0.00..3.19 rows=3 width=12) (actual time=0.712..0.715 rows=2 loops=1)
      ->  Index Only Scan using demo_val on demo  (cost=0.00..4.47 rows=4 width=12) (actual time=0.710..0.713 rows=2 loops=1)
            Index Cond: (val = '5'::double precision)
            Heap Fetches: 0
    Planning Time: 0.086 ms
    Execution Time: 0.750 ms
    (6 rows)
    
  6. Clean up the table for this exercise.

    DROP TABLE IF EXISTS demo;
    
    DROP TABLE
    
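The verify-with-the-planner workflow works on any engine. An illustrative Python sketch using SQLite's EXPLAIN QUERY PLAN (the output format differs from YSQL's EXPLAIN, but the question is the same: is the index used?):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo (num INTEGER, val INTEGER)")
conn.executemany("INSERT INTO demo VALUES (?, ?)",
                 [(n, n % 10) for n in range(1, 43)])  # 42 rows, val in 0..9
conn.execute("CREATE INDEX demo_val ON demo(val, num)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM demo WHERE val=5 ORDER BY num LIMIT 3"
).fetchall()
# One of the plan rows names demo_val, confirming the index serves
# both the val=5 filter and the ORDER BY num.
```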

Next step

Build an application

Copyright © 2017-2022 Yugabyte, Inc. All rights reserved.