# Create my first dataset
The goal of this guide is to explore QuestDB's features and to interact with time series data. It assumes you have an instance running; you can find guides to set up QuestDB on the introduction page. In this tutorial, you will learn how to create tables, insert data, run queries, and delete tables.
As an example, we will look at hypothetical temperature readings from a variety of sensors.
info
All commands are run through the Web Console accessible at http://localhost:9000.
You can also run the same SQL via the Postgres endpoint or the REST API.
## Creating a table
The first step is to create tables. One table will contain the metadata of our sensors, and the other will contain the actual readings (payload data) from these sensors.
Let's start by creating the sensors table:
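The original statement is not shown here; a minimal sketch consistent with the sample data below (an `ID`, a `make`, and a `city` column — the column types are assumed) might look like:

```questdb-sql
CREATE TABLE sensors (ID LONG, make STRING, city STRING);
```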
For more information about this statement, please refer to the CREATE TABLE reference documentation.
## Inserting data
Let's populate our sensors table with procedurally-generated data:
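The insert statement itself is missing here; a sketch using QuestDB's row and random generators, with the value lists reconstructed from the sample outputs shown later in this guide, might be:

```questdb-sql
INSERT INTO sensors
SELECT
    x ID, -- increasing integer produced by the row generator
    rnd_str('Honeywell', 'Omron', 'United Automation', 'RS Pro') make,
    rnd_str('New York', 'Miami', 'Boston', 'Chicago', 'San Francisco') city
FROM long_sequence(10000) x;
```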
For more information about insert statements, refer to the INSERT reference documentation. To learn more about the functions used here, see the random generator and row generator pages.
Our sensors table now contains 10,000 randomly-generated sensors of different makes located in various cities. Use this command to view the table:
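As noted later in this guide, `SELECT * FROM` is optional in QuestDB, so viewing the table can be as simple as:

```questdb-sql
sensors;
```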
It should look like the table below:
| ID | make | city |
|---|---|---|
| 1 | Honeywell | Chicago |
| 2 | United Automation | Miami |
| 3 | Honeywell | Chicago |
| 4 | Omron | Miami |
| ... | ... | ... |
Let's now create some sensor readings. In this case, we will create the table and generate the data at the same time:
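The statement is not shown here; a sketch consistent with the generated data below (timestamps starting 2019-10-17, temperatures between 15 and 23, sensor IDs up to 10,000, and 10 million rows — the exact step sizes are assumptions) might be:

```questdb-sql
CREATE TABLE readings
AS (
    SELECT
        x ID,
        timestamp_sequence(
            to_timestamp('2019-10-17T00:00:00', 'yyyy-MM-ddTHH:mm:ss'),
            rnd_long(1, 10, 0) * 100000L) ts, -- random step between 0.1s and 1s
        rnd_double(0) * 8 + 15 temp,          -- readings between 15 and 23
        rnd_long(0, 10000, 0) sensorId
    FROM long_sequence(10000000) x)
TIMESTAMP(ts)
PARTITION BY MONTH;
```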
The query above demonstrates how to use the following features:
- `TIMESTAMP(ts)` elects the `ts` column as a designated timestamp. This enables partitioning tables by time.
- `PARTITION BY MONTH` creates a monthly partitioning strategy where the stored data is effectively sharded by month.
The generated data will look like the following:
| ID | ts | temp | sensorId |
|---|---|---|---|
| 1 | 2019-10-17T00:00:00.000000Z | 19.37373911 | 9160 |
| 2 | 2019-10-17T00:00:00.600000Z | 21.91184617 | 9671 |
| 3 | 2019-10-17T00:00:01.400000Z | 16.58367834 | 8731 |
| 4 | 2019-10-17T00:00:01.500000Z | 16.69308815 | 3447 |
| 5 | 2019-10-17T00:00:01.600000Z | 19.67991569 | 7985 |
| ... | ... | ... | ... |
## Running queries
Let's select all records from the readings table (note that SELECT * FROM is
optional in QuestDB):
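With that shorthand, the query is simply the table name:

```questdb-sql
readings;
```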
Let's also select the count of records from readings:
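A sketch of the count query, assuming the result table below:

```questdb-sql
SELECT count() FROM readings;
```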
| count |
|---|
| 10,000,000 |
and the average reading:
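A sketch of the average query; the `average` alias is assumed from the result column name below:

```questdb-sql
SELECT avg(temp) average FROM readings;
```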
| average |
|---|
| 18.999217780895 |
We can now use the sensors table alongside the readings table to get more
interesting results using a JOIN:
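The JOIN itself is not shown here; a sketch consistent with the result columns below (the `sensId` alias distinguishes the joined sensor ID from the readings' own `ID`):

```questdb-sql
SELECT *
FROM readings
JOIN (
    SELECT ID sensId, make, city
    FROM sensors)
ON readings.sensorId = sensId;
```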
The results should look like the table below:
| ID | ts | temp | sensorId | sensId | make | city |
|---|---|---|---|---|---|---|
| 1 | 2019-10-17T00:00:00.000000Z | 16.472200460982 | 3211 | 3211 | Omron | New York |
| 2 | 2019-10-17T00:00:00.100000Z | 16.598432033599 | 2319 | 2319 | Honeywell | San Francisco |
| 3 | 2019-10-17T00:00:00.100000Z | 20.293681747009 | 8723 | 8723 | Honeywell | New York |
| 4 | 2019-10-17T00:00:00.100000Z | 20.939263119843 | 885 | 885 | RS Pro | San Francisco |
| 5 | 2019-10-17T00:00:00.200000Z | 19.336660059029 | 3200 | 3200 | Honeywell | San Francisco |
| 6 | 2019-10-17T00:00:01.100000Z | 20.946643576954 | 4053 | 4053 | Honeywell | Miami |
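The next query (the maximum reading per city) is missing from this page; a sketch consistent with the result columns below might be:

```questdb-sql
SELECT city, max(temp)
FROM readings
JOIN (
    SELECT ID sensId, city
    FROM sensors) a
ON readings.sensorId = a.sensId;
```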
The results should look like the table below:
| city | max |
|---|---|
| New York | 22.999998786398 |
| San Francisco | 22.999998138348 |
| Miami | 22.99999994818 |
| Chicago | 22.999991705861 |
| Boston | 22.999999233377 |
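The query producing the next result is also missing; judging by the output (average readings for Omron sensors in Miami on 2019-10-21), a sketch might look like the following. The interval filter and the implicit grouping by the non-aggregated columns are assumptions:

```questdb-sql
SELECT ts, city, make, avg(temp) average
FROM readings
JOIN (
    SELECT ID sensId, city, make
    FROM sensors
    WHERE city = 'Miami' AND make = 'Omron') a
ON readings.sensorId = a.sensId
WHERE ts IN '2019-10-21'; -- interval search: all rows on that day
```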
The results should look like the table below:
| ts | city | make | average |
|---|---|---|---|
| 2019-10-21T00:00:44.600000Z | Miami | Omron | 20.004285872098 |
| 2019-10-21T00:00:52.400000Z | Miami | Omron | 16.68436714013 |
| 2019-10-21T00:01:05.400000Z | Miami | Omron | 15.243684089291 |
| 2019-10-21T00:01:06.100000Z | Miami | Omron | 17.193984104315 |
| 2019-10-21T00:01:07.100000Z | Miami | Omron | 20.778686822666 |
| ... | ... | ... | ... |
For more information about these statements, please refer to the SELECT and JOIN pages.
## Deleting tables
We can now clean up the demo data using `DROP TABLE`. Be careful with this statement, as QuestDB cannot recover data that is deleted this way:
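Dropping both tables created in this guide:

```questdb-sql
DROP TABLE readings;
DROP TABLE sensors;
```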