{ "cells": [ { "cell_type": "code", "execution_count": null, "id": "b2b1b1fa", "metadata": {}, "outputs": [], "source": [ "import agh_db_lectures\n", "agh_db_lectures.prepare_notebook_for_sql()" ] }, { "cell_type": "markdown", "id": "b8ba2bdc", "metadata": {}, "source": [ "# Indexes and optimization" ] }, { "cell_type": "code", "execution_count": null, "id": "1f301048", "metadata": {}, "outputs": [], "source": [ "%sql postgresql://demo_user:demo_pwd@localhost:25432/agh_it_northwind\n", "%config SqlMagic.feedback = False" ] }, { "cell_type": "markdown", "id": "d285644a", "metadata": {}, "source": [ "## Drive Access" ] }, { "cell_type": "markdown", "id": "b69e064f", "metadata": {}, "source": [ "Access to **persistent storage** is usually block-based." ] }, { "cell_type": "markdown", "id": "5734a9ed", "metadata": {}, "source": [ "![](magnetic-disk.svg)" ] }, { "cell_type": "markdown", "id": "8e69024e", "metadata": {}, "source": [ "_notes_\n", "\n", "In HDDs, bits of data are stored by polarizing the surface of a disk covered with magnetic material. Data bits are stored in circular tracks on the surface and tracks are divided into **sectors**.\n", "\n", "The disk rotates. Reads and writes of data bits are performed by a head that is mounted on a movable arm. When a read or write is to be peformed, the arm must first move (to position the head over the right track). The time the arm needs is called the **seek time**. The system must then wait (while the disk rotates) until the head is above the right sector of that track.\n", "\n", "512B sectors are typical for HDDs." ] }, { "cell_type": "markdown", "id": "69ebc46f", "metadata": {}, "source": [ "![](flash-nor-nand.svg)" ] }, { "cell_type": "markdown", "id": "2e0a7bc2", "metadata": {}, "source": [ "_notes_\n", "\n", "Flash memories are semiconductor-based and are used in USB thumb drives, SD cards, eMMCs, and SSDs.\n", "\n", "There are 2 main types of flash memories, NOR and NAND, named after logic gates. Data in NOR memories can be accessed in a word-based fashion. \"Word\" in computer science is a unit of memory that might have different size depending on the context but is generally in the order or magnitude of 2 bytes." ] }, { "cell_type": "markdown", "id": "ed9b2960", "metadata": {}, "source": [ "![](nand-writes.svg)" ] }, { "cell_type": "markdown", "id": "7fd492fe", "metadata": {}, "source": [ "_notes_\n", "\n", "Data in NAND memories is accessed in a block-based fashion. NAND memories are more cost-efficient and dominate the industry.\n", "\n", "512B - 4096B blocks are typical for SSDs.\n", "\n", "Note that we often say \"sectors\", \"blocks\", or \"pages\" and mean the same. Be aware, though, that in some specific context they might have slightly different meanings." ] }, { "cell_type": "markdown", "id": "969ee35e", "metadata": {}, "source": [ "![](blocks-managed-by-drive.svg)" ] }, { "cell_type": "markdown", "id": "cb1cbcfd", "metadata": {}, "source": [ "_notes_\n", "\n", "Both HDDs and SSDs have controllers inside that manage memory blocks. They can put aside blocks detected as damaged / worn out.\n", "\n", "Physically close data sectors can be retrieved faster in HDDs (saving on seek time). Multiple data items can be retrieved/modified faster if stored in the same sector (in case of both HDDs and SSDs)." 
] }, { "cell_type": "markdown", "id": "ebc7ffc0", "metadata": {}, "source": [ "## Records Layout" ] }, { "cell_type": "markdown", "id": "e5049df1", "metadata": {}, "source": [ "Records can be stored in a file (or in a page) in different ways." ] }, { "cell_type": "markdown", "id": "7431e92e", "metadata": {}, "source": [ "### Fixed or Variable Size Records" ] }, { "cell_type": "markdown", "id": "1ec43bc5", "metadata": {}, "source": [ "![](fixed-size-records.svg)" ] }, { "cell_type": "markdown", "id": "28a82272", "metadata": {}, "source": [ "_notes_\n", "\n", "The storage overhead can be unnecessarily large due to the padding bytes.\n", "\n", "What if the padding byte (be it the ASCII space character or the null byte) is allowed to occur in the actual data?\n", "\n", "Fixed size records are less frequently used in practice." ] }, { "cell_type": "markdown", "id": "babb75c5", "metadata": {}, "source": [ "![](variable-size-records.svg)\n", "\n", "_(another, real-life approach is presented in the \"Postgres Page\" subsection)_" ] }, { "cell_type": "markdown", "id": "1683b321", "metadata": {}, "source": [ "_notes_\n", "\n", "The picture above represents just one possible way of table rows in variable length records." ] }, { "cell_type": "markdown", "id": "d9ccfdec", "metadata": {}, "source": [ "### Gaps" ] }, { "cell_type": "markdown", "id": "ea500cbc", "metadata": {}, "source": [ "![](record-gaps.svg)\n", "\n", "_(another, real-life approach is presented in the \"Postgres Page\" section)_" ] }, { "cell_type": "markdown", "id": "3e2cfa0b", "metadata": {}, "source": [ "_notes_\n", "\n", "Note that **regardless** of whether a DBMS uses fixed or variable size records, it has to handle row deletions. A likely approach is one that leaves free space (\"gaps\") after deleted records.\n", "\n", "Presence of gaps would make it impossible to process records in a file sequentially! As a solution, each record could include a pointer to the next non-deleted record in the file. This would enable sequential (**forward only**) processing of records." ] }, { "cell_type": "markdown", "id": "52a27e4d", "metadata": {}, "source": [ "## Data Layout in Postgres" ] }, { "cell_type": "markdown", "id": "d9c778c1", "metadata": {}, "source": [ "_notes_\n", "\n", "Postgres — 1GB (by default) segment files. Internally organized as 8 KB (by default) pages." ] }, { "cell_type": "code", "execution_count": null, "id": "1c7c6e01", "metadata": {}, "outputs": [], "source": [ "!ls /var/lib/postgresql/data/base" ] }, { "cell_type": "code", "execution_count": null, "id": "bc632690", "metadata": {}, "outputs": [], "source": [ "%%sql\n", "SELECT oid, datname FROM pg_database" ] }, { "cell_type": "markdown", "id": "db52527c", "metadata": {}, "source": [ "_notes_\n", "\n", "Each Postgres database has its own filesystem directory." ] }, { "cell_type": "code", "execution_count": null, "id": "6a1c2231", "metadata": {}, "outputs": [], "source": [ "!ls /var/lib/postgresql/data/base/16385" ] }, { "cell_type": "markdown", "id": "6a2f133d", "metadata": {}, "source": [ "_notes_\n", "\n", "Put the oid of the Northwind database in the command above." 
] }, { "cell_type": "code", "execution_count": null, "id": "3271f479", "metadata": {}, "outputs": [], "source": [ "%%sql\n", "SELECT oid, relname, relfilenode FROM pg_class WHERE relname NOT LIKE 'pg_%'" ] }, { "cell_type": "markdown", "id": "f295978f", "metadata": {}, "source": [ "_notes_\n", "\n", "Each table and index is in a separate file.\n", "\n", "`_fsm` files store \"free space maps\" of tables (binary tree tracking free space in each page, one byte per page in leaf nodes).\n", "\n", "`_vm` files store \"visibility maps\" of tables (tacking of pages with no dead tuples, i.e., pages that are visible to all active transactions)." ] }, { "cell_type": "markdown", "id": "20fb17e3", "metadata": {}, "source": [ "![](postgres-page.svg)" ] }, { "cell_type": "markdown", "id": "19274ce0", "metadata": {}, "source": [ "_notes_\n", "\n", "Rows organized in a block as depicted.\n", "\n", "What if a row would be crossing block boundary? In particular, what about big objects (CLOB, BLOB)?\n", "\n", "Postgres uses special \"TOAST\" tables for out-of-line storing of big values." ] }, { "cell_type": "code", "execution_count": null, "id": "559e7c27", "metadata": {}, "outputs": [], "source": [ "%%sql\n", "-- COMMIT;\n", "VACUUM FULL;" ] }, { "cell_type": "markdown", "id": "a16cbed6", "metadata": {}, "source": [ "_notes_\n", "\n", "Postgres does not automatically reorder records within a page to eliminate gaps. Neither does it exploit every possibility of inserting a new record within a gap. The database administrator can request immediate repacking of records with the `VACUUM` command." ] }, { "cell_type": "markdown", "id": "d335963e", "metadata": {}, "source": [ "More: https://www.postgresql.org/docs/current/storage.html" ] }, { "cell_type": "markdown", "id": "3cff2ae1", "metadata": {}, "source": [ "## Index types" ] }, { "cell_type": "markdown", "id": "5a12990a", "metadata": {}, "source": [ "Data structures.\n", "\n", "- B-Tree\n", "- Hash\n", "- GiST Indexes\n", "- others…\n", "\n", "More: https://www.postgresql.org/docs/current/indexes-types.html" ] }, { "cell_type": "markdown", "id": "c93e9887", "metadata": {}, "source": [ "_notes_\n", "\n", "Indexes **might** speed up some queries and typically slow down data modifications (need to update the index as well as the data).\n", "\n", "Postgres is famous for its \"Generalized Search Tree\" indexes. They are useful for querying geospatial data, e.g., to find points that lie within a rectangle." ] }, { "cell_type": "markdown", "id": "2b004e03", "metadata": {}, "source": [ "## Sequential File Scan" ] }, { "cell_type": "markdown", "id": "2a003d19", "metadata": {}, "source": [ "![](no-index.svg)" ] }, { "cell_type": "markdown", "id": "cc8748ae", "metadata": {}, "source": [ "$\\sigma_{name > Monocled}(users)$" ] }, { "cell_type": "markdown", "id": "72b5779f", "metadata": {}, "source": [ "_notes_\n", "\n", "Consider a relation with users data. The basic operation that can be performed, regardless of the presence of idexes, is a **file scan**. It boils down to reading all record pages of the relation one by one.\n", "\n", "The cost of a file scan is easy to estimate based on the size of the relation's data." ] }, { "cell_type": "markdown", "id": "eeb79e63", "metadata": {}, "source": [ "## Clustered Index" ] }, { "cell_type": "markdown", "id": "24a3a575", "metadata": {}, "source": [ "Also called a **primary index**." 
] }, { "cell_type": "markdown", "id": "f0a9e08a", "metadata": {}, "source": [ "![](dense-clustering-index.svg)" ] }, { "cell_type": "markdown", "id": "bc2f5ca6", "metadata": {}, "source": [ "$\\sigma_{name > Monocled}(users)$\n", "\n", "$\\sigma_{name < Monocled}(users)$" ] }, { "cell_type": "markdown", "id": "2986635b", "metadata": {}, "source": [ "_notes_\n", "\n", "Clustered index is one whose ordering matches the physical ordering of the data.\n", "\n", "A clustered index could be looked up once to speed up a \"greater than\" selection using its indexed attribute(s). The file would then be scanned sequentially from that point to the end to retrieve the records. For a B-tree index, we can estimate the maximum cost of index lookup as $n$ times the page read cost, where $n$ is the height of the B-tree. In practice, the cost is likely to be slightly lower, due to the tree root being already cached.\n", "\n", "A clustered index would not be used with a \"less than\" selection on its attribute(s). Such selection would be performed with a file scan starting from the beginning of the file. The operation would terminate once the first record outside the desired range is read.\n", "\n", "The difference in index usefulness in \"less than\" and \"greater than\" selections results from the assumption that file can only be scanned forward (which might be the case if records only contain pointers to subsequent records)." ] }, { "cell_type": "markdown", "id": "9785898f", "metadata": {}, "source": [ "## Non-Clustered Index" ] }, { "cell_type": "markdown", "id": "a7129e6c", "metadata": {}, "source": [ "Also called a **secondary index**." ] }, { "cell_type": "markdown", "id": "2370e9a2", "metadata": {}, "source": [ "![](2-dense-indexes.svg)" ] }, { "cell_type": "markdown", "id": "74bba5d4", "metadata": {}, "source": [ "_notes_\n", "\n", "A non-clustered index uses an ordering different from the physical ordering of the records.\n", "\n", "At any point in time there can exist at most one clustered index on a table. Multiple non-clustered indexes can exist simultaneously.\n", "\n", "A clustered index does **not have to** be made on the primary key attributes. It could be (and it might be the most frequent practice), but the physical ordering of the records **can also match the ordering of some other attribute(s) or none at all**." ] }, { "cell_type": "markdown", "id": "3f41b11e", "metadata": {}, "source": [ "## Sparse Index" ] }, { "cell_type": "markdown", "id": "97c261d7", "metadata": {}, "source": [ "![](sparse-and-dense-index.svg)" ] }, { "cell_type": "markdown", "id": "4bad783e", "metadata": {}, "source": [ "$\\sigma_{name=Rantoo}$" ] }, { "cell_type": "markdown", "id": "62762684", "metadata": {}, "source": [ "_notes_\n", "\n", "A clustered index can be sparse, e.g., only point to a subset of the records.\n", "\n", "The remaining records can be located by finding the closest record present in the sparse index and then scanning the file forward until the desired record is found.\n", "\n", "A non-clustered index has to be dense. Imagine that we wanted to make our non-clustered index on `registration_date` sparse. If we had removed any entry from it, it would not have been possible to easily locate it in the file using the index. A sequential file scan would have been required, defeating the purpose of an index." 
] }, { "cell_type": "markdown", "id": "38dfc196", "metadata": {}, "source": [ "## Selection Speedup With a Non-Clustering Index" ] }, { "cell_type": "markdown", "id": "88c735a8", "metadata": {}, "source": [ "![](secondary-index-pages.svg)" ] }, { "cell_type": "markdown", "id": "272b6b4f", "metadata": {}, "source": [ "$\\sigma_{registration\\_date = 2024-04-15}$\n", "\n", "$\\sigma_{registration\\_date \\geq 2024-10-01}$" ] }, { "cell_type": "markdown", "id": "53991721", "metadata": {}, "source": [ "_notes_\n", "\n", "Consider a selection that should retrieve a single or a handful of records. It can leverage a secondary index to locate these records and only read the necessary data pages.\n", "\n", "Now, consider a selection that should retrieve half of the rows of some table. It might seem that a secondary index on the selection's attribute can be used to speed up this operation. But it might be the case that almost every page of the relation file does contain at least one of the desired rows. If almost all pages need to be read, then a sequential file scan is going to be just as fast as indexed lookup.\n", "\n", "Indexed lookup could be even worse if we were to retrieve the records in the order of the index. We would need to read page 4 for the first record, page 5 for the second, page 6 for the third and page 1 for the fourth… But later on we would need another record from page 4. In case it already got evicted from the cache, we would need to read it again, thus unoptimally performing more page reads than necessary.\n", "\n", "If we consider HDD drives (where additional time is required to seek to a particular page), such use of the index is even less optimal." ] }, { "cell_type": "markdown", "id": "8cfc0e6a", "metadata": {}, "source": [ "### Page Bitmap" ] }, { "cell_type": "markdown", "id": "a5dae613", "metadata": {}, "source": [ "|page | bitmap |\n", "|------|--------|\n", "|page 1| 1 |\n", "|page 2| 1 |\n", "|page 3| 0 |\n", "|page 4| 1 |\n", "|page 5| 1 |\n", "|page 6| 1 |\n", "|page 7| 1 |" ] }, { "cell_type": "markdown", "id": "8b0ec7b0", "metadata": {}, "source": [ "To counter that, a database system might first use the index to compute a bitmap of the pages that need to be accessed. Then, it can access the pages in the optimal order, without repeated reads of the same page.\n", "\n", "Since only 1 bit has to be stored for each page, the bitmap is likely to fit in the operational memory of the computer." ] }, { "cell_type": "markdown", "id": "87c771a0", "metadata": {}, "source": [ "## A Note on Record Pointers" ] }, { "cell_type": "markdown", "id": "e3e07004", "metadata": {}, "source": [ "Postgres index --> record\n", "\n", "Postgres index --> **page** of the record" ] }, { "cell_type": "markdown", "id": "943585f0", "metadata": {}, "source": [ "_notes_\n", "\n", "Indexes in Postgres (and likely many other DBMSes) only have pointers to pages. Records inside a page can be rearranged without affecting the indexes." 
] }, { "cell_type": "markdown", "id": "9d546f84", "metadata": {}, "source": [ "## Use of Data Statistics" ] }, { "cell_type": "markdown", "id": "1debb1fb", "metadata": {}, "source": [ "![](statistics-to-planning.svg)" ] }, { "cell_type": "markdown", "id": "dc2b3ebb", "metadata": {}, "source": [ "```sql\n", "-- Show plan.\n", "EXPLAIN SELECT * FROM users WHERE registration_date >= '2025-10-01';\n", "```\n", "\n", "```sql\n", "-- Show plan and perform.\n", "EXPLAIN ANALYZE SELECT * FROM users WHERE registration_date >= '2025-10-01';\n", "```" ] }, { "cell_type": "markdown", "id": "d808e2de", "metadata": {}, "source": [ "_notes_\n", "\n", "The DBMS gathers statistics about data during normal operation (queries execution). It uses them to estimate the cost of different applicable query execution plans (say, it tries to answer \"How likely is it that this query will require reading over 95% pages in the relation's file?\").\n", "\n", "`EXPLAIN` is a common, non-standard SQL commad that presents the query execution plan that the DBMS would use for a query (we'll soon see some sample plans that `EXPLAIN` can produce).\n", "\n", "Under Postgres, the `ANALYZE` keyword can be used to have the DBMS additionally execute the query and show both the estimated and real execution costs.\n", "\n", "Postgres' `ANALYZE` can cause statitics to be updated. This means running `EXPLAIN ANALYZE` twice in a row can yield different results." ] }, { "cell_type": "markdown", "id": "b553ca23", "metadata": {}, "source": [ "## Nested-Loop Join Algorithm" ] }, { "cell_type": "markdown", "id": "e33145b1", "metadata": {}, "source": [ "$x \\bowtie_\\Theta y$\n", "\n", "```\n", "for each record block b_x in x:\n", " for each record block b_y in y:\n", " for each record r_x in b_x:\n", " for each record r_y in b_y:\n", " if theta(r_x, r_y) is true:\n", " yield {r_x, r_y};\n", "```" ] }, { "cell_type": "markdown", "id": "b71a2152", "metadata": {}, "source": [ "_notes_\n", "\n", "We shall use pseudocode to explain some of the algorithms.\n", "\n", "A generic join algorithm that can be used regardless of the presence of indexes.\n", "\n", "Has an **outer** and **inner** loop processing relations' data. The 2 additional loops inside our \"inner\" loop are of little interest to us now.\n", "\n", "If each relations' data is too large to fit in the available RAM, the inner relation's data shall be read multiple times.\n", "\n", "It is better to use smaller relation in the inner loop.\n", "\n", "The loop that reads blocks of the inner relation can process the each time in a different order, to better utilize the LRU caching of blocks.\n", "\n", "If our theta join is in fact an equi-join, an inner relation's index on join's attribute(s) can be used.\n", "\n", "If the inner relation can be kept entirely in memory, no block needs to be read twice." ] }, { "cell_type": "markdown", "id": "9e26239c", "metadata": {}, "source": [ "## External Sort-Merge Algorithm" ] }, { "cell_type": "markdown", "id": "d93ef94e", "metadata": {}, "source": [ "Also called \"N-way merge\"." 
] }, { "cell_type": "markdown", "id": "8fdd354d", "metadata": {}, "source": [ "```\n", "N := 1;\n", "\n", "while there are relation blocks left to read:\n", " read as many record blocks as can be sorted in memory;\n", " sort records in memory;\n", " write records to run file nr N;\n", " N := N + 1;\n", "\n", "current_blocks[N];\n", "\n", "for I from 1 to N:\n", " current_blocks[I] := read the first block of run file I;\n", "\n", "while there are any records in current_blocks:\n", " I := index of the block with smallest record;\n", " yield the record from current_blocks[I];\n", " remove the record from current_blocks[I];\n", " if current_blocks[I] is empty:\n", " if there are more blocks in run file I:\n", " current_blocks[I] := read a block from run file I;\n", "```" ] }, { "cell_type": "markdown", "id": "b103d2f4", "metadata": {}, "source": [ "_notes_\n", "\n", "Sorting of data that fits in RAM is easy.\n", "\n", "Data that does not fit in RAM can be sorted by storing intermediate results in temporary files (\"run files\").\n", "\n", "If we have to little memory to hold N record blocks at once, we can perform the N-way merge sort in parts — first soft the data from some M run files to a new run file, then sort the data from the next M run files, etc. The M would be chosen so that M blocks can be loaded to memory at once.\n", "\n", "Query optimizer would account for the costs of reads and writes on run files.\n", "\n", "In general, sorting is expensive." ] }, { "cell_type": "markdown", "id": "2dd80350", "metadata": {}, "source": [ "## Equi-Join Algorithms" ] }, { "cell_type": "markdown", "id": "9d76ea70", "metadata": {}, "source": [ "$x \\bowtie_{x.a = y.a} y$\n", "\n", "$x \\bowtie y \\equiv \\Pi_{attributes\\ with\\ a\\ deduplicated}(x \\bowtie_{x.a = y.a} y)$" ] }, { "cell_type": "markdown", "id": "3cb605c2", "metadata": {}, "source": [ "_notes_\n", "\n", "A faster algorithm than the nested loop join may exist if the join condition is the equality of relation's respective attribute values.\n", "\n", "Note that a natural join is just an equi-join with a projection." ] }, { "cell_type": "markdown", "id": "dc4e1c12", "metadata": {}, "source": [ "### Merge-Join Algorithm" ] }, { "cell_type": "markdown", "id": "36a9b803", "metadata": {}, "source": [ "```\n", "sort both relations on the `a' attribute;\n", "\n", "S_x := read the initial records from x with the same value of `a' attribute;\n", "S_y := read the initial records from y with the same value of `a' attribute;\n", "\n", "loop:\n", " if S_x.a = S_y.a:\n", " yield all pairs of combined S_x and S_y;\n", " if there are no more records in x or y:\n", " break;\n", " S_x := read subsequent records from x with the same value of `a' attribute;\n", " S_y:= read subsequent records from y with the same value of `a' attribute;\n", " else if S_x.a < S_y.a:\n", " if there are no more records in x:\n", " break;\n", " S_x := read subsequent records from x with the same value of `a' attribute;\n", " else if S_y.a < S_x.a:\n", " if there are no more records in y:\n", " break;\n", " S_y:= read subsequent records from y with the same value of `a' attribute;\n", "```" ] }, { "cell_type": "markdown", "id": "9ee4bd5a", "metadata": {}, "source": [ "_notes_\n", "\n", "For larger relations avoids the need to read one of them multiple times.\n", "\n", "Incurs the overhead of sorting its inputs, unless they are already appropriately sorted.\n", "\n", "Instead of operating on sorted records, the DBMS can operate on sorted leaf nodes of B-tree indexes. 
Additional sorting of the intermediate result on record addresses shall then be needed to access the actual records in their physical order." ] }, { "cell_type": "markdown", "id": "efc82a9b", "metadata": {}, "source": [ "## Hash Join Algorithm" ] }, { "cell_type": "markdown", "id": "8cb44fb0", "metadata": {}, "source": [ "```\n", "N := number of blocks that fit in memory;\n", "N := N - 1;\n", "\n", "h := hash function that returns a number from 1 to N;\n", "\n", "buckets[N];\n", "\n", "for each record r_x in x:\n", " I := h(r_x.a);\n", " put r_x in buckets[I];\n", " if buckets[I] is a full block:\n", " write buckets[I] to run file x_I;\n", " empty buckets[I];\n", "write the remaining partially-filled buckets to run files x_*;\n", "\n", "empty buckets[*];\n", "\n", "for each record r_y in y:\n", " I := h(r_y.a);\n", " put r_y in buckets[I];\n", " if buckets[I] is a full block:\n", " write buckets[I] to run file y_I;\n", " empty buckets[I];\n", "write the remaining partially-filled buckets to run files y_*;\n", "\n", "delete buckets;\n", "\n", "procedure hash_join_in_memory(partition_file_x, partition_file_y):\n", " f := a suitable hash function (different from h above);\n", " Idx := in-memory hash index using f;\n", " for each record r_x in partition_file_x:\n", " add r_x to Idx;\n", " for each record r_y in partition_file_y:\n", " for each record r_x in Idx[r_y.a]:\n", " if r_x.a = r_y.a:\n", " yield {r_x, r_y};\n", " \n", "\n", "for I from 1 to N:\n", " yield the output of hash_join_in_memory(x_I, y_I);\n", "```" ] }, { "cell_type": "markdown", "id": "16228532", "metadata": {}, "source": [ "_notes_\n", "\n", "The first hashing has its buckets written to disk. The second hashing is performed on each bucket's records and is done in memory.\n", "\n", "If buckets produced by the first hashing are too large to be processed in memory, the algorithm can be extended to recursively partition them recursively, with results written to disk, until the partitions are small enough.\n", "\n", "In many cases it shall of course be possible to perform the entire hash-join in-memory, without the need for the algorithm part that writes records to run files.\n", "\n", "We built the index `Idx` on partitions of records from `x`. `x` is therefore called the **build input** and `y` is the **probe input**." ] }, { "cell_type": "markdown", "id": "9e0cdf96", "metadata": {}, "source": [ "## Operations With Complex Conditions" ] }, { "cell_type": "markdown", "id": "5e84491d", "metadata": {}, "source": [ "### Conjunction" ] }, { "cell_type": "markdown", "id": "19146378", "metadata": {}, "source": [ "$\\sigma_{\\Theta_1 \\land \\Theta_2 \\land \\Theta_3}(x)$\n", "\n", "If, e.g., an index $l$ is available for the $\\Theta_2$ condition:\n", "\n", "![](conjunction.svg)" ] }, { "cell_type": "markdown", "id": "6d6dca36", "metadata": {}, "source": [ "_notes_\n", "\n", "Otherwise, a sequential scan can be used.\n", "\n", "If many subconditions have compatible indexes, the DBMS can use statistics about data to determine which one to use (which one should return the smallest number of rows).\n", "\n", "Alternatively, the DBMS can compute a bit map of pages to access based on multiple indexes." 
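] }, { "cell_type": "markdown", "id": "f2b93c17", "metadata": {}, "source": [ "One way to read the picture above in code: probe the index for the indexed subcondition, fetch only the matching rows, and apply the remaining subconditions as ordinary filters. A minimal Python sketch; the relation, the index, and the three predicates are all made up for illustration:\n", "\n", "```python\n", "# sigma_{theta1 AND theta2 AND theta3}(users), with an index available only for theta2.\n", "users = [\n", "    {'id': 1, 'name': 'Rantoo',   'city': 'Krakow', 'active': True},\n", "    {'id': 2, 'name': 'Monocled', 'city': 'Krakow', 'active': False},\n", "    {'id': 3, 'name': 'Bigwig',   'city': 'Gdansk', 'active': True},\n", "]\n", "index_on_city = {'Krakow': [0, 1], 'Gdansk': [2]}  # city -> record positions\n", "\n", "theta1 = lambda r: r['active']              # not indexed\n", "theta2_city = 'Krakow'                      # the indexed subcondition\n", "theta3 = lambda r: r['name'] >= 'Monocled'  # not indexed\n", "\n", "# Use the index for theta2, then filter the fetched rows with theta1 and theta3.\n", "candidates = (users[pos] for pos in index_on_city.get(theta2_city, []))\n", "print([r for r in candidates if theta1(r) and theta3(r)])\n", "```"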
] }, { "cell_type": "markdown", "id": "3b2c1586", "metadata": {}, "source": [ "### Disjunction" ] }, { "cell_type": "markdown", "id": "7a6a03dd", "metadata": {}, "source": [ "$\\sigma_{\\Theta_1 \\lor \\Theta_2 \\lor \\Theta_3}(x)$\n", "\n", "If indexes $l_1$, $l_2$, and $l_3$ are available for the $\\Theta_1$, $\\Theta_2$, and $\\Theta_3$ conditions:\n", "\n", "![](disjunction.svg)" ] }, { "cell_type": "markdown", "id": "f1f7cb20", "metadata": {}, "source": [ "_notes_\n", "\n", "If the index for at least one subconditions is missing, sequential file scan becomes necessary.\n", "\n", "Set operations are also implemented with mergesort-like algoritms (hence can be expensive)." ] }, { "cell_type": "markdown", "id": "c5e06774", "metadata": {}, "source": [ "## Optimization With \"EXPLAIN\" Command" ] }, { "cell_type": "markdown", "id": "33957cf3", "metadata": {}, "source": [ "**Nonstandard**, likely to differ even between versions of the same DBMS." ] }, { "cell_type": "markdown", "id": "94239550", "metadata": {}, "source": [ "### MySQL Format\n", "\n", "From: https://dev.mysql.com/doc/refman/8.4/en/explain-output.html\n", "\n", "```sql\n", "EXPLAIN SELECT tt.TicketNumber, tt.TimeIn,\n", " tt.ProjectReference, tt.EstimatedShipDate,\n", " tt.ActualShipDate, tt.ClientID,\n", " tt.ServiceCodes, tt.RepetitiveID,\n", " tt.CurrentProcess, tt.CurrentDPPerson,\n", " tt.RecordVolume, tt.DPPrinted, et.COUNTRY,\n", " et_1.COUNTRY, do.CUSTNAME\n", " FROM tt, et, et AS et_1, do\n", " WHERE tt.SubmitTime IS NULL\n", " AND tt.ActualPC = et.EMPLOYID\n", " AND tt.AssignedPC = et_1.EMPLOYID\n", " AND tt.ClientID = do.CUSTNMBR;\n", "```\n", "\n", "May produce (in presence of appropriate indexes):\n", "\n", "```\n", "table type possible_keys key key_len ref rows Extra\n", "et ALL PRIMARY NULL NULL NULL 74\n", "tt ref AssignedPC, ActualPC 15 et.EMPLOYID 52 Using\n", " ClientID, where\n", " ActualPC\n", "et_1 eq_ref PRIMARY PRIMARY 15 tt.AssignedPC 1\n", "do eq_ref PRIMARY PRIMARY 15 tt.ClientID 1\n", "```" ] }, { "cell_type": "markdown", "id": "797cdb25", "metadata": {}, "source": [ "## Postgres Format" ] }, { "cell_type": "markdown", "id": "eb71286b", "metadata": {}, "source": [ "From: https://thoughtbot.com/blog/reading-an-explain-analyze-query-plan\n", "\n", "```sql\n", "EXPLAIN SELECT *\n", "FROM tenk1 t1, tenk2 t2\n", "WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2\n", "ORDER BY t1.fivethous;\n", "```\n", "\n", "May produce (in presence of appropriate indexes):\n", "\n", "\n", "```\n", " QUERY PLAN\n", "-------------------------------------------------------------------------------------------------\n", " Sort (cost=717.34..717.59 rows=101 width=488)\n", " Sort Key: t1.fivethous\n", " Sort Method: quicksort Memory: 77kB\n", " -> Hash Join (cost=230.47..713.98 rows=101 width=488)\n", " Hash Cond: (t2.unique2 = t1.unique2)\n", " -> Seq Scan on tenk2 t2 (cost=0.00..445.00 rows=10000 width=244)\n", " -> Hash (cost=229.20..229.20 rows=101 width=244)\n", " Buckets: 1024 Batches: 1 Memory Usage: 28kB\n", " -> Bitmap Heap Scan on tenk1 t1 (cost=5.07..229.20 rows=101 width=244)\n", " Recheck Cond: (unique1 < 100)\n", " -> Bitmap Index Scan on tenk1_unique1 (cost=0.00..5.04 rows=101 width=0)\n", " Index Cond: (unique1 < 100)\n", " Planning time: 0.194 ms\n", " Execution time: 8.008 ms\n", "```" ] }, { "cell_type": "markdown", "id": "9279e419", "metadata": {}, "source": [ "_notes_\n", "\n", "By adding `ANALYZE` after `EXPLAIN` we can additionally get information about the time really spend on query execution." 
] }, { "cell_type": "markdown", "id": "80c891ec", "metadata": {}, "source": [ "## Index Creation" ] }, { "cell_type": "markdown", "id": "d059899b", "metadata": {}, "source": [ "```sql\n", "-- Postgres\n", "CREATE INDEX my_index ON some_table (some_colum);\n", "\n", "CREATE UNIQUE INDEX my_index\n", "ON some_table USING BTREE\n", "(some_colum * other_column);\n", "```" ] }, { "cell_type": "markdown", "id": "745c4b8c", "metadata": {}, "source": [ "```sql\n", "-- MySQL / MariaDB\n", "CREATE INDEX my_index ON some_table (some_colum);\n", "\n", "CREATE UNIQUE INDEX my_index\n", "ON some_table (some_colum * other_column, third_column)\n", "USING BTREE;\n", "```" ] }, { "cell_type": "markdown", "id": "f3f0dd85", "metadata": {}, "source": [ "### Automatic Index Creation" ] }, { "cell_type": "markdown", "id": "8b05d60c", "metadata": {}, "source": [ "`UNIQUE` and `PRIMARY KEY` constraints." ] }, { "cell_type": "markdown", "id": "fa232426", "metadata": {}, "source": [ "_notes_\n", "\n", "Databases typically automatically create indexes needed to verify the `UNIQUE` and `PRIMARY KEY` constraints.\n", "\n", "Under default configuration, MySQL and MariaDB make the primary key index (if present) a clustering index.\n", "\n", "In Postgres there are **no clustered indexes**.\n", "\n", "Postgres does have a `CLUSTER` command that orders table's data according to some index, but **it does not do what you'd expect**. It's a one-time operation and the order is not maintained afterwards." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "/gnu/store/q35bxk1i5blx0rcgxphhrq9xqmgha9bz-python-3.11.11/bin/python3", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.11" } }, "nbformat": 4, "nbformat_minor": 5 }