That is a requirement only because Comments are owned by Blogs. Notice that Blog.Id is. If you are among them, you may want to consider using an alternate indexed column to maintain the chronological order. Redo is a simple reversal of the undo operation. Because the versioned record is stored as binary, there are no problems with different collations from different databases. On the other hand, it is a good idea to use tables that have an abstraction layer. And finally, the TOP and FILTER operators reduce the number of rows returned to one. While the ROW_NUMBER execution plan was different, the cost was still buried within the Clustered Index Seek and the query itself didn't add anything in terms of scans or reads. Update is identical except for different values in the insertion. You can see that the rollback and restore operations will be performed on the PermanentId values associated with each comment, and you can see each comment gets its own PermanentId. Instead, you must flag it as deleted ("soft delete"). Don't use complex notations like "x.y.z" for the version number; just use a single integer. When the data sets are larger, the processing time goes up considerably. If you study this, you will see that this comment version has a different PermanentRecordId. At first, supporting multiple records from multiple tables sounds impossibly difficult, but it works with almost no added effort. An abstract table also grows quickly, and even indexes cannot solve that problem. Initially, the DEFAULT version points to state 0. So this is getting interesting! First, you must insert a record into the base Audit table. Now we have two entries with the same PermanentRecordId. When you look at the Blog table, you immediately understand its purpose.
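The soft delete described above can be sketched in T-SQL. It assumes the audit scheme used throughout this article (an Audit table carrying PermanentRecordId and an IsActive flag); the variable names are illustrative:

```sql
-- Soft delete: the row is never removed, only deactivated, so every
-- prior version stays available for rollback and undo.
-- Assumed schema: Audit(Id, PermanentRecordId, IsActive, ...)
UPDATE dbo.Audit
SET    IsActive = 0
WHERE  PermanentRecordId = @PermanentRecordId
  AND  IsActive = 1;  -- at most one version of a record should be active
```

Because only the Audit table is touched, the same statement works for any record type that inherits from it.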
Finally, we call the SaveChanges method to insert the new Department record into the database. This time I'll use the APPLY statement as part of the join: This time the query has a single scan on Version and a total of five reads on both tables, and this familiar execution plan: So the APPLY method was able to take the single row from the Document table and find that TOP (1) match from the Version table without resorting to joins and multiple scans. Similar to records in database tables, version-store records are stored in 8192-byte pages. Let's look at an example of versioning some data. It was hard not to notice. The version number also gave us a check before we were to release a new 'build' that would change the database's structure. In a lot of databases and applications we didn't do updates or deletes – we did inserts. The MAX still results in a Stream Aggregate operation, which we already know is generally more costly than the Top operations. This means that the row stays on the page but a bit is changed in the row header to indicate that the row is really a ghost. So, to answer the questions: if you know the data set is going to be small, use ROW_NUMBER, but if the data set is going to be large or you're not sure how large it's going to be, use TOP. So for all Exchange items that are marked as a record, the behavior maps to the Record - locked column, and the Record - unlocked column is not relevant. In general, the st… Other tables would have 10s or even 100s of rows of data for a version. First, the TOP query: The query ran in 37ms. There were 5,000 Publishers. In this instance the TOP operator is still forcing a join on the system, but instead of looping through the Version records it's doing a single read due to referring to the Document table directly.
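The APPLY form described above can be sketched like this; the Document and Version column names are assumptions, since the article's full schema isn't shown:

```sql
-- CROSS APPLY pulls the TOP (1) matching Version row for the single
-- Document row, avoiding a join that scans all of that document's versions.
SELECT d.DocumentId, v.VersionId
FROM   dbo.Document AS d
CROSS APPLY (SELECT TOP (1) v2.VersionId
             FROM   dbo.Version AS v2
             WHERE  v2.DocumentId = d.DocumentId
             ORDER BY v2.VersionId DESC) AS v
WHERE  d.DocumentId = @DocumentId;
```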
Because the PK of Blog records will change as you insert new versions, we need a permanent ID to identify a single blog entry and group versions of the same blog entry. Here are the scans and reads: It's a bit hard to read, but it's a series of five Clustered Index Seek operations, each taking 20% of the total cost of the batch and joined together through Nested Loop joins. Record versioning is normally accomplished by creating separate audit tables that mirror the schema of the main tables. We have a PermanentRecordId for this blog entry, and all other information is intact. The execution plans for both TOP and MAX were identical, with a Clustered Index Seek followed by a Top operation. This is one nice feature: to perform a soft delete, you don't even need to know the record type. Going further. Grant volunteers for PASS and is on the Board of Directors as the Immediate Past President. When a record in a table or index is updated, the new record is stamped with the transaction sequence number of the transaction that is doing the update. This has some interesting ideas that seem to fulfil most of my needs. As long as all your update operations are done correctly, there should be only one record where IsActive=1. All of these different approaches will return the data appropriately. Here's what we found. Figure 1: I used Red Gate's SQL Data Generator to load the sample data. Comment.PermanentBlogId will store the PermanentId for the blog entry. The query returns just one row. No versioning solution will circumvent the fundamental challenges of versioned records, but we can greatly improve on the traditional approach to auditing data. For example, the following insertion sample could be converted into a Stored Procedure that takes the Blog table values and the value for Audit.Updated_By. In versioned recording, an update is really a soft delete followed by an insert. Soft deletes are performed directly against the audit table.
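One possible shape for the base Audit table and an inheriting Blog table, as described above. Only Audit.Id, PermanentRecordId, IsActive, and Updated_By come from the article; the remaining columns and the data types are assumptions:

```sql
CREATE TABLE dbo.Audit (
    Id                INT IDENTITY(1,1) PRIMARY KEY,
    PermanentRecordId UNIQUEIDENTIFIER NOT NULL,          -- shared by all versions of one logical record
    IsActive          BIT NOT NULL DEFAULT (1),
    Created           DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Updated_By        NVARCHAR(128) NOT NULL
);

CREATE TABLE dbo.Blog (
    Id    INT NOT NULL PRIMARY KEY
              REFERENCES dbo.Audit (Id),                  -- Blog.Id is both the PK and the FK to Audit
    Title NVARCHAR(200) NOT NULL,
    Body  NVARCHAR(MAX) NOT NULL
);
```

Because the versioned payload lives in Blog while all version bookkeeping lives in Audit, new auditable tables follow the same two-table pattern without duplicating the audit columns.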
What we want is a list of publications, each demonstrating the max version that is less than a given maximum version. I've been doing a lot of searching for an elegant method for auditing with the ability to roll back. The database generates its value when we insert the record into the database. He has worked with SQL Server since 6.0 back in 1995. When the snapshot transaction reads a row that has a version chain, the SQL Server Database Engine follows the chain and retrieves the row where the transaction sequence number is: The first time that Flyway runs (i.e. Activate the version you found in step 1. This is largely because, more often than not, this type of query is interpreted in the same way by the optimizer whether you supplied a TOP or a MAX operator. After clearing the procedure and system cache, the MAX query produced a different set of scans and reads: The scans against Document and the number of reads against Version were less, and the execution plan, a sub-set shown here, was changed considerably: Instead of a scan against the Document table, this execution plan was able to take advantage of the filtering provided through the Version and Publication table prior to joining to the Document table. Next, instead of a TOP operator, the data gets segmented by the Segment operator based on an internal expression, a value derived within the query, probably the ORDER BY statement. It doesn't scale. Since it's not part of the leading edge of the only index on the table – the PK – we're forced to do a scan: This is severe enough that it justifies adding another index to the table. In this case, insertion now involves two operations. Having made this bold statement, please allow me to shade the answer with the following: test your code to be sure. This can be used for archiving previous records when saving new data.
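The "max version that is less than a given maximum version" requirement can be expressed as a grouped query. This sketch assumes the Version table carries a PublicationId; the names are illustrative:

```sql
-- For each publication, the highest version number still below @MaxVersion.
SELECT v.PublicationId,
       MAX(v.VersionId) AS VersionId
FROM   dbo.Version AS v
WHERE  v.VersionId < @MaxVersion
GROUP BY v.PublicationId;
```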
Adding new tables under our version control system is easy, and the CRUD code you already saw above is easily adapted, especially if you choose to encapsulate it in Stored Procedures. The ability to lock and unlock a record uses record versioning that isn't supported for Exchange items. This query had 2 scans against the Version table and 6 reads, and only 2 reads against the Document table. Instead of referencing the Version table for its DocumentId, I'll reference the Document table like this: Now when we run the query, there is one scan and six reads on the Version table with this execution plan. This is all from the change to using the PublisherId. However, I'd have serious reservations using this. This table had a lot of churn: many INSERTS and DELETES. Now when we run the queries at the same time, the estimated cost for the TOP is only taking 49% while the estimated cost of the MAX is 50%. By using some clever entity inheritance, we can solve the audit problem for all tables in the database instead of just one at a time. I'll start with: The first result generated a single scan with three reads in 5ms and this execution plan: Next I'll run a simple version of the MAX query using a sub-select. And then the query itself changes for the ROW_NUMBER version (thanks to Matt Miller for helping with this one): This query ran in 44ms and had an interesting set of scans and reads: This query returned the exact same data with fewer scans and reads. When a transaction using the snapshot isolation level starts, the instance of the SQL Server Database Engine records all of the currently active transactions. Most reporting frameworks do not understand the concept of versioned data. They expect each record to be a distinct data item, not a 'version' of a data item it has already seen. Now let's try this with joins. That means we had to have mechanisms for storing the data in such a way that we could pull out the latest version, a particular version, or data as of a moment in time.
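The ROW_NUMBER variant of the latest-version query might look like the following; the RowNum alias matches the article's description, while the rest of the schema is assumed:

```sql
-- Number each document's versions from newest to oldest, then keep row 1.
WITH numbered AS (
    SELECT v.DocumentId,
           v.VersionId,
           ROW_NUMBER() OVER (PARTITION BY v.DocumentId
                              ORDER BY v.VersionId DESC) AS RowNum
    FROM   dbo.Version AS v
)
SELECT DocumentId, VersionId
FROM   numbered
WHERE  RowNum = 1;   -- the most recent version per document
```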
Let's add a table to support versionable, nested blog comments to demonstrate how similar the CRUD is for a new table. This can be performed on the Audit table alone, making it easy to encapsulate. Now instead of selecting by Document, I'll change the query so that it selects by Publisher. The most dramatic change came in the ROW_NUMBER function. Grant presents at conferences and user groups, large and small, all over the world. Alternately, if you have the exact Blog.Id or Audit.Id, you can set IsActive that way. Let's try out a few CRUD operations to see how this new approach feels. I designed a small database to show versions of data. It adds to the clutter of tables, makes maintenance more difficult, and in general, makes it harder for new developers to digest. It was a select against a queue table – a table whose purpose was to store data for an implementation of a work queue. I don't like the schema duplication. Some may only have one or two new rows out of a series of new versions. Grant Fritchey is a Data Platform MVP with over 30 years' experience in IT, including time spent in support and development. I think we can do better. As it turns out, we indeed can do much, much better! Okay, so you're convinced now that the versioning works. Any INSERT, UPDATE, or DELETE that affects a certain row essentially creates a new version of that row (with a timestamp of when this happened). When this substitution is not made, the MAX value requires aggregation rather than simple ordering of the data, and this aggregation can be more costly.
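A versionable, nested Comment table following the same inheritance pattern could look like this. PermanentBlogId comes from the article; the ParentId column and the data types are assumptions:

```sql
CREATE TABLE dbo.Comment (
    Id              INT NOT NULL PRIMARY KEY
                        REFERENCES dbo.Audit (Id),   -- same PK-as-FK pattern as Blog.Id
    PermanentBlogId UNIQUEIDENTIFIER NOT NULL,       -- PermanentId of the owning blog entry
    ParentId        UNIQUEIDENTIFIER NULL,           -- parent comment's PermanentId, for nesting
    Body            NVARCHAR(MAX) NOT NULL
);
```

Referencing permanent IDs rather than Audit PKs keeps comment threads attached to the logical blog entry even as new versions of it are inserted.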
When you set SYSTEM_VERSIONING = OFF, all users that have sufficient permissions will be able to modify schema and content of history table or even to permanently delete the history table. Versioning opens the possibility of extended operations, such as undo/redo. To get the data out of these tables, you have to have a query that looks something like this, written to return the latest version: You can write this query using MAX or ROW_NUMBER and you can use CROSS APPLY instead of joining. If you don’t have a record of the current version, then you have to sniff out each database with a SQL Comparison tool, and generate the upgrade scripts and data migration scripts. Here’s the new TOP, this time using APPLY right out of the gate because that proved earlier to result in a faster query: The complete query ran in 53ms. The solution involves a base audit table from which all auditable tables inherit. The Blog table looks much cleaner without all the audit garbage distraction. Reporting is also a challenge. But, interestingly enough, the execution times for the data I’m retrieving and the number of scans and reads are the same. The interesting thing is that the optimizer changed our MAX to a TOP as if we had re-supplied the TOP query. The Audit table contains a PermanentRecordId. A database is both a physical and logical grouping of data. The most common error of versioning in database design is to keep past prices in the same table as current prices. State ID values apply to any and all changes made in the geodatabase. The PermanentRecordID column from Audit then becomes a foreign key on the Entity.EntityID column and can be used as a reference for other tables, allowing for both referential integrity and decoupled revision changes. In the sample below, we find the version preceding the active version by relying on primary keys for chronological order. 
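Finding the version immediately preceding the active one, relying on the primary key for chronological order as described above, can be sketched as follows (Audit schema assumed as before):

```sql
-- Undo: locate the version just before the currently active one.
SELECT TOP (1) a.Id
FROM   dbo.Audit AS a
WHERE  a.PermanentRecordId = @PermanentRecordId
  AND  a.Id < (SELECT Id
               FROM   dbo.Audit
               WHERE  PermanentRecordId = @PermanentRecordId
                 AND  IsActive = 1)
ORDER BY a.Id DESC;  -- a larger Id means a newer version, so DESC finds the predecessor
```

Activating the row this returns (and deactivating the current one) performs the undo; redo is the same walk in the opposite direction.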
An edit operation is any task or set of tasks (e.g., additions, deletions, or modifications) undertaken on features and rows. Most reporting frameworks do not understand the concept of versioned data. What happened to the ROW_NUMBER function? The DepartmentID is an identity field. After the data loads, I defragmented all the indexes. Grant has written books for Apress and Simple-Talk. That means they are different versions of the same logical record. If you keep your data according to its version number, but need to work only with a particular version, what is the best SQL for the job? This is to optimize performance during a delete operation. Audit.Id is the PK and Blog.Id is the FK. I used Red Gate's SQL Data Generator to load the sample data. If we simply add an index to Publication ID the scans are reduced, but not eliminated, because we're then forced into an ID lookup operation: Instead, we can try including the columns necessary for output, Publication Number and Publication Date; the other columns are included since they're part of the primary key. Your audit requirements may include other fields here. Every edit operation performed in the geodatabase creates a new database state. At the time marked 'A' on the graph, we noticed that CPU increased dramatically. The queries below return the server version and edition. Most importantly, the (new) standard gives fairly simple SELECT syntax to… The only real difference is the order in which the tables are accessed, despite the fact that the queries submitted were identical. Also, like the others, the results of these seeks are joined through a Nested Loop operation. The Audit table contains all the version information. Comment.Id is an FK to Audit.Id, just like Blog.Id. Versioning a database means sharing all changes of a database that are necessary for other team members in order to get the project running properly. Then, you must get the PK ID of that inserted record for use with the second insertion into the Blog table.
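The two-step insertion described above can be sketched with SCOPE_IDENTITY() carrying the Audit PK into the Blog insert; the variable names are illustrative:

```sql
-- Step 1: create the version bookkeeping row.
INSERT INTO dbo.Audit (PermanentRecordId, IsActive, Updated_By)
VALUES (@PermanentRecordId, 1, @UpdatedBy);

-- Step 2: insert the payload, keyed by the Audit.Id just generated.
-- SCOPE_IDENTITY() returns the last identity value produced in this scope.
INSERT INTO dbo.Blog (Id, Title, Body)
VALUES (SCOPE_IDENTITY(), @Title, @Body);
```

Wrapping both statements in one Stored Procedure, as the article suggests, keeps callers from ever seeing the two-step dance.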
I designed a small database to show versions of data. So we're done, right? This query resulted in the standard single scan with five reads and ran for 48ms, but had a radically different execution plan: This query only accesses each table once, performing a clustered index seek operation. This version number is then stored on the SQL Server and accessible through the msdb database via the following query. This gives us the version number of our data-tier application as well as a host of other information. But it was small, never more than 100 rows. After the data loads, I defragmented all the indexes. A company I worked for had a well-defined need for versioned data.
We could select MAX(Created), but an active flag is faster, and as you will see for Undo operations, necessary. Now, comments can have versions just like blog entries, but nothing is ever lost. If a record exceeds 8192 bytes, the record will be split across two different records. So next, we ran the … Imagine years later when the BlogComment_Archive table behaves differently than the Blog_Archive table. It frequently substitutes TOP for the MAX operator. ROW_NUMBER clearly shows some strong advantages, reducing the number of operations, scans and reads. The interesting point, though, is that the reads and scans against the other tables, especially the Publication table, are very low, lower than the other methods. Further, the total cost of the query is estimated at 277.188, far exceeding the cost threshold for parallelism that I have set on my machine of 50. This is passed to the Sequence Project operator, which is adding a value; in this case, the ROW_NUMBER or RowNum column itself. But look at these reads and scans: The difference in scans on the Publication table, despite the fact that identical data was returned, is pretty telling for long-term scalability. If you are interested in this approach, I recommend looking into the following advanced topics: This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL). (Note: the SQL Server 'timestamp' data type will not work because records are updated when their Active status changes, and this changes the timestamp value.) Which one works best?
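An update under this scheme is a soft delete followed by an insert, and the active flag makes the current version cheap to find. Wrapping the steps in a transaction keeps exactly one version active; this is a sketch against the assumed Audit/Blog schema:

```sql
BEGIN TRANSACTION;

-- Soft delete the current version.
UPDATE dbo.Audit
SET    IsActive = 0
WHERE  PermanentRecordId = @PermanentRecordId
  AND  IsActive = 1;

-- Insert the new version's bookkeeping row, then its payload.
INSERT INTO dbo.Audit (PermanentRecordId, IsActive, Updated_By)
VALUES (@PermanentRecordId, 1, @UpdatedBy);

INSERT INTO dbo.Blog (Id, Title, Body)
VALUES (SCOPE_IDENTITY(), @NewTitle, @NewBody);

COMMIT TRANSACTION;
```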
Limiting based on PublicationId resulted in a pretty large increase in scans and reads, as well as the creation of a work table: I've blown up a section of it for discussion here: This shows that the Sort, which previously acted so quickly on smaller sets of data, is now consuming 56% of the estimated cost since the query can't filter down on this data in the same fashion as before. If I want to add a BlogComment table, I have to add another audit table. A first approach to provide a simple form of version control for the database table table1 is to create a shadow table via the command CREATE TABLE shadow_table1 (id INT, data1 INT, data2 INT, version INT); where old versions of the entries are automatically stored by a PL/SQL function which is triggered by an update on a table entry. Part of the execution plan for the MAX uses TOP, just like the TOP query, but part of it uses an actual Aggregate operator. With some rewriting it might be possible to get the performance on this back on par with the other processes. TOP is probably the better solution in most instances when comparing MAX and TOP. Instead of five Clustered Index Seeks, this has only four. Now, not only do we have schema duplication, but we have duplicate abstractions of auditing that can grow apart over time. You cannot set SYSTEM_VERSIONING = OFF if you have other objects created with SCHEMABINDING using temporal query extensions, such as referencing SYSTEM_TIME. I usually tend to create a separate table named Settings and keep the version there. I'm going to run a series of queries, trying out different configurations and different situations. The TOP query ran for 274ms with the following I/O. Note that the Add method adds the new entity in the Added state. Like you said, you sacrifice referential integrity to simplify decoupled revision changes.
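The shadow-table idea mentioned above is described in terms of a PL/SQL function; an equivalent sketch for SQL Server would be an AFTER UPDATE trigger. This is an illustration, not the article's code:

```sql
CREATE TRIGGER dbo.trg_table1_version
ON dbo.table1
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- 'deleted' holds the pre-update rows; copy them into the shadow
    -- table with an incremented per-row version number.
    INSERT INTO dbo.shadow_table1 (id, data1, data2, version)
    SELECT d.id, d.data1, d.data2,
           COALESCE((SELECT MAX(s.version)
                     FROM   dbo.shadow_table1 AS s
                     WHERE  s.id = d.id), 0) + 1
    FROM   deleted AS d;
END;
```

The trigger fires once per statement, so the set-based INSERT...SELECT handles multi-row updates correctly.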
We used sp_whoisactive to identify a frequent query that was taking a large amount of CPU. Some tables will basically have a new row for each new version of the data. All of this came together in 4,000,000 Publications. While it's true that you must perform an INNER JOIN with a WHERE to select anything using this strategy, the operation is not costly because it is performed on indexed fields. When something should be deleted, it should instead be marked as not current or deleted. Also, notice that the CRUD demonstrated above requires zero changes when a new table is added. But there is a lot more to Data-Tier Applications than just version numbers. Easy. The execution plans didn't change and the differences were measured in very small amounts. This works best on small sets of data. As you can see, the Audit table kicked right in and did its job. When a secondary index record is delete-marked or the secondary index page is updated by a newer transaction, InnoDB looks up the database record in the clustered index. This is determining all the versions at a particular point in time. Best practice #6: database version should be stored in the database itself. Version #3 will always have a PK ID smaller than version #4 of the same record. Databases don't have version … Execution time was 12 seconds. Here's a relatively simple way to implement data versioning in a database, in a way that should be scalable as well. Which one do you use and when? I tried to go somewhat heavy on the data so I created 100,000 Documents, each with 10 versions. Adding in the ROW_NUMBER query to run side by side with the others was also interesting.
I decided to set up a wide swath of tests for these methods in order to establish as many of the parameters around which one works best, given a reasonably well defined set of circumstances. This will still only result in a single-row result set. In the new scheme, every field could have many identical values in the audited table, and no simple way to enforce that with an index. Date stamp, active state, who updated it: record versioning imposes a layer on top of CRUD operations that makes them more complex, especially on Update and Delete. That is correct because this is a different set of versioned data. Now we'll perform the join operation from the Document table to the Version table. Ghosts are necessary for row-level locking, but are also necessary for snapshot isolation, where we need to maintain the older versions of rows. Finally, let's join all the data together. The best place to store past versions of your data is in a separate table. This resulted in 2 scans of 6 reads each because the top query in the join returned all 10 rows for the Document ID provided. An ESE database looks like a single file to Windows. Each comment will get its own PermanentId. All performance will be recorded in terms of reads and scans, since duration is too dependent on the machine running the query. It's confusing to imagine that both Blog entries and Comments have versions! To use these CRUD extensions, add two more versions for a total of four blog entries.
Query 1, the raw query: select @@version as version. It had 1 scan against the Version table and a combined 5 reads against both tables. Here is the MAX version of the FROM clause: This query ran in 46ms. Even the execution plan, although slightly more complex, shows the increase in performance this approach could deliver.