# Ditto Basics

## CRUD Fundamentals
Ditto includes a robust query engine that lets you perform various filter operations to carry out traditional create, read, update, and delete (CRUD) operations. The Ditto SDK provides a comprehensive set of methods and functions to facilitate data interaction and perform a wide range of operations in your app. For read operations, use the `find` and `observeLocal` methods; for modifications, use the `update`, `upsert`, `evict`, and `remove` methods.

### Overview

The following table provides a high-level overview of the different ways you can perform CRUD in Ditto:

| Operation | Description |
| --- | --- |
| Create | Using the `upsert` method, either insert a new document or update an existing document for a given document ID. |
| Read | Using either the `find` or `observeLocal` method, retrieve documents based on the specific criteria you pass as parameters. |
| Update | Using either the `update` or `upsert` method, write changes to Ditto. |
| Delete | Using either the `remove` or `evict` method, delete data in Ditto. In addition, using a *soft-delete pattern*, indicate data as deleted without physically removing it from Ditto. |

For detailed information on CRUD, and for an overview of the various operators and path navigations you can use to construct sophisticated queries in your app, see the platform manual.

### Create

Due to Ditto's conflict-free concurrency model, there is no concept of "insert." Each peer functions under the assumption that its write transaction already exists somewhere within the mesh network. Therefore, Ditto uses a combined approach known as *upsert*. This approach writes only the fields within the document that have changed: the delta. If all of the fields in the document are new, however, Ditto creates an entirely new document object.

#### Upserting Documents

With the `upsert` method, you can execute any of the
following actions in your app:

| Action | Description |
| --- | --- |
| Apply delta updates to existing documents | Write changes to only the specific fields within the document that were modified. |
| Insert a document | If all of the fields are new, create a new document object. |
| Load initial data | Upsert and flag data you want accessible to end users at app startup, such as sample chat messages from a central backend API. |
| Supply a document ID | When creating a new document, you can assign your own unique identifier if desired; otherwise, Ditto automatically generates and assigns one for you. |

### Read

To retrieve data in Ditto, depending on your goals and use case, use any of the following query types:

- **Local query**: Using the `find` and `findByID` methods, quickly get your own data in a one-time execution against your local Ditto store. For instance, call the `findByID` method to target a specific document, or, to fetch one or more documents based on certain criteria and conditions, call the `find` method instead.
- **Live query**: Using the `observeLocal` method, establish a listener to observe your local changes in realtime.
- **Replication query**: Using the `subscribe` method, keep your data consistent with other peers connected in the mesh network.

#### Local Queries

Similar to a traditional database query, the `find` and `findByID` methods execute a local query that fetches and returns all relevant documents. Intended for quick access to data stored locally on the device, such as a profile image, local queries are one-time executions that do not involve other peers connected in the mesh network.
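To make the one-time, local nature of these queries concrete, here is a minimal sketch that models a collection held in memory. This is an illustrative toy, not the Ditto SDK; the `find` and `findByID` names merely mirror the methods described above.

```javascript
// Toy in-memory collection modeling one-time local queries.
// Illustrative sketch only -- NOT the Ditto SDK.
class LocalCollection {
  constructor(docs) {
    // Index documents by their _id for direct findByID lookups
    this.docs = new Map(docs.map((d) => [d._id, d]));
  }

  // Fetch a single document by its ID, or undefined if absent
  findByID(id) {
    return this.docs.get(id);
  }

  // Fetch all documents matching a predicate; runs once, locally,
  // without involving any other peer in the mesh
  find(predicate) {
    return [...this.docs.values()].filter(predicate);
  }
}

const people = new LocalCollection([
  { _id: 'abc123', name: 'susan', age: 31 },
  { _id: 'def456', name: 'frank', age: 42 },
]);

const susan = people.findByID('abc123');
const overForty = people.find((doc) => doc.age > 40);
```

Both calls read only the local state and return immediately, which is why local queries are a poor fit for data that other peers may still be changing; that is what live and replication queries are for.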
#### Live Queries

A live query subscription is essentially a local query, but it uses the `observeLocal` method to establish continuous listening for realtime changes written to the local Ditto store. Live queries are useful when you want to monitor changes from your local Ditto store and react to them immediately. For instance, when your end users update their own profiles, you can asynchronously display the changes to them in realtime.

#### Replication Queries

A replication query executes asynchronously through the `subscribe` method on each remote peer connected within the mesh network. The query specifies the data your Small Peer local Ditto store is interested in receiving updates about. When remote peers modify the data you've expressed interest in, they publish the changes back to you, the subscribing peer. Rather than continuously polling for updates, which is resource-intensive and generally inefficient, the asynchronous listener you set up triggers only when data matching your query changes in the Ditto store.

### Update

The following table provides an overview of the CRDTs and their associated behavior for a given operation:

| Operation | Description |
| --- | --- |
| Set `register` | Sets the value for a given field in the document. |
| Set `map` | Sets the value for a given field in the map. |
| Remove `register` | Removes the value for a given field in the document. |
| Remove `map` | Removes the value for a given key in the map structure. |
| Replace with `counter` | Converts a number value for a given field into a counter. |
| Increment `counter` | Unlike a number, increments the counter by the given positive integer value. |
| Decrement `counter` | Unlike a number, decrements the counter by the given negative integer value. |

#### Updating a Single Document

To update a single document using the `upsert` method:

1. Specify the document collection you want to update.
2. Pass your field changes.
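An upsert writes only the fields that differ from what is already stored (the delta). As a rough illustration of that idea, the following sketch computes a delta and merges it; this is a deliberately simplified model, not Ditto's actual CRDT machinery, and the `computeDelta`/`upsert` names are invented for this example.

```javascript
// Simplified sketch of delta-style upsert semantics.
// NOT the real Ditto CRDT logic -- illustrative only.
function computeDelta(existing, incoming) {
  const delta = {};
  for (const [field, value] of Object.entries(incoming)) {
    // Only fields whose values differ form part of the delta
    if (existing[field] !== value) {
      delta[field] = value;
    }
  }
  return delta;
}

function upsert(store, id, fields) {
  const existing = store.get(id) ?? {};
  const delta = computeDelta(existing, fields);
  if (Object.keys(delta).length === 0) {
    // Store is already up to date: nothing commits
    return { committed: false, delta };
  }
  store.set(id, { ...existing, ...delta });
  return { committed: true, delta };
}

const store = new Map([['abc123', { name: 'susan', age: 31 }]]);

// Only `age` differs, so the delta contains a single field
const first = upsert(store, 'abc123', { name: 'susan', age: 32 });
// Re-applying the same fields yields an empty delta: no commit
const second = upsert(store, 'abc123', { name: 'susan', age: 32 });
```

The second call models the "already up to date" case described below: an empty delta means no write is committed.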
Only delta changes are written to local Ditto stores; if the Ditto store is already up to date, the changes do not commit.

```swift
do {
    // Upsert JSON-compatible data into Ditto
    let docID = try ditto.store["people"].upsert([
        "name": "Susan",
        "age": 31
    ])
} catch {
    // Handle error
    print(error)
}
```

```kotlin
val docID = ditto.store["people"].upsert(
    mapOf(
        "name" to "Susan",
        "age" to 31
    )
)
```

```javascript
const docID = await ditto.store.collection('people').upsert({
  name: 'Susan',
  age: 31,
})
console.log(docID) // "123abc"
```

```java
Map<String, Object> content = new HashMap<>();
content.put("name", "Susan");
content.put("age", 31);
DittoDocumentID docID = ditto.store.collection("people").upsert(content);
// docID => 507f191e810c19729de860ea
```

```csharp
var docID = ditto.Store.Collection("people").Upsert(
    new Dictionary<string, object> {
        { "name", "Susan" },
        { "age", 31 },
    }
);
```

```cpp
json person = json({{"name", "Susan"}, {"age", 31}});
DocumentId doc_id = ditto.get_store().collection("people").upsert(person);
```

```rust
let person = json!({
    "name": "Susan".to_string(),
    "age": 31,
});
let collection = ditto.store().collection("people").unwrap();
let id = collection.upsert(person).unwrap();
```

#### Updating Multiple Documents

If you want to perform writes to multiple documents, start the transaction asynchronously to avoid blocking the main thread. For example, in Swift, use `DispatchQueue.global`, as demonstrated in the following snippet. For more information, see the platform manual.

```swift
DispatchQueue.global(qos: .default).async {
    ditto.store.write { transaction in
        let scope = transaction.scoped(toCollectionNamed: "passengers-\(thisFlight)")
        // Loop inside the transaction to avoid writing to the database too frequently
        self.passengers.forEach { scope.upsert($0.dict) }
    }
}
```

```kotlin
val results = ditto.store.write { transaction ->
    val cars = transaction.scoped("cars")
    val people = transaction.scoped("people")
    val docId = "abc123"
    people.upsert(mapOf("_id" to docId, "name" to "Susan"))
    cars.upsert(mapOf("make" to "Hyundai", "color" to "red", "owner" to docId))
    cars.upsert(mapOf("make" to "Jeep", "color" to "pink", "owner" to docId))
    people.findById(DittoDocumentID(docId)).evict()
}
```

```javascript
const results = await ditto.store.write(async (transaction) => {
  const cars = transaction.scoped('cars')
  const people = transaction.scoped('people')
  // In this example new person and car documents are created, and
  // finally the person document that was just created is evicted.
  // If any of these operations fail, all others are not applied.
  const susanID = await people.upsert({
    name: 'Susan',
  })
  await cars.upsert({
    make: 'Hyundai',
    color: 'red',
    owner: susanID,
  })
  await people.findByID(susanID).evict()
})
// The return value of a transaction is a list that contains a
// summary of all operations in the transaction and the document
// IDs that were affected:
// results == [
//   { type: 'inserted', docID: DocumentID {...}, collectionName: 'people' },
//   { type: 'inserted', docID: DocumentID {...}, collectionName: 'cars' },
//   { type: 'evicted', docID: DocumentID {...}, collectionName: 'people' }
// ]
```

```cpp
auto results = ditto.get_store().write([&](WriteTransaction &write_txn) {
    ScopedWriteTransaction people = write_txn.scoped("people");
    ScopedWriteTransaction cars = write_txn.scoped("cars");
    auto doc_id = "abc123";
    people.upsert({{"name", "Susan"}, {"_id", DocumentId(doc_id)}});
    cars.upsert({{"make", "Hyundai"}, {"owner", DocumentId(doc_id)}});
    cars.upsert({{"make", "Toyota"}, {"owner", DocumentId(doc_id)}});
});
```
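The transaction result shape above (a list of `{ type, docID, collectionName }` summaries) can be sketched with a toy transaction runner. This is an illustrative model under assumed names (`writeTransaction`, `scoped`, `upsert`, `evict`), not the SDK implementation; in particular, rollback on failure is omitted for brevity.

```javascript
// Toy write transaction that applies operations and returns a summary
// list mirroring the { type, docID, collectionName } shape shown above.
// Illustrative only -- NOT the Ditto SDK; no rollback is modeled.
function writeTransaction(stores, body) {
  const results = [];
  const scoped = (collectionName) => ({
    upsert(doc) {
      // Use the supplied _id, or generate a placeholder ID
      const docID = doc._id ?? `auto-${results.length}`;
      stores[collectionName] = stores[collectionName] ?? new Map();
      const type = stores[collectionName].has(docID) ? 'updated' : 'inserted';
      stores[collectionName].set(docID, doc);
      results.push({ type, docID, collectionName });
      return docID;
    },
    evict(docID) {
      stores[collectionName]?.delete(docID);
      results.push({ type: 'evicted', docID, collectionName });
    },
  });
  body(scoped);
  return results;
}

const stores = {};
const results = writeTransaction(stores, (scoped) => {
  const people = scoped('people');
  const cars = scoped('cars');
  const susanID = people.upsert({ name: 'susan' });
  cars.upsert({ make: 'hyundai', owner: susanID });
  people.evict(susanID);
});
```

Running the body yields three summary entries (two inserts and one eviction), matching the commented `results` list in the JavaScript snippet above.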
### Delete

Managing the amount of data stored and replicated across resource-constrained Small Peers interconnected in a bandwidth-limited mesh is crucial for maintaining optimal performance in a peer-to-peer environment. In distributed system architecture, you must strike a balance between data availability and system efficiency:

- The more data replicated across connected peers, the more timely offline read access becomes.
- The less data replicated across connected peers, the lower the likelihood that peer devices run out of disk space or experience memory leaks.

#### Evict and Remove

Depending on your use case, use either the `evict` or `remove` method to implement memory-management practices, such as automatic resource allocation, memory deallocation, and routine cleaning and maintenance, to help optimize memory usage in your app.

#### Balancing Syncing and Evicting

Given these technical tradeoffs, use the `subscribe` and `evict` methods carefully to implement your tradeoff design decisions:

- To sync more data across peers connected in the mesh, call `subscribe`.
- To remove data stored locally on a peer device, call `evict`.

Evicting a document is not permanent; as long as there is at least one active subscription with a query that includes an evicted document, that document reappears as soon as it is available in the mesh.

#### Controlling Which Documents Sync

You can signify that data is irrelevant for peer-to-peer replication, but should still be retained locally, by adding an `isSafeToEvict` field to the document property tree:

```json
{
  "_id": "abc123",
  "color": "red",
  "mileage": 40000,
  "isSafeToEvict": true,
  "createdAt": "2023-05-22T22:24:24.217Z"
}
```
To ensure that Small Peers continue syncing documents that are still considered relevant, include `isSafeToEvict == false` in their subscription queries, and then use some means to inform clients to flag any documents they consider irrelevant. That way, only documents that a client sets to `true` are prevented from syncing. Once flagged, clients purge the irrelevant documents from their caches, all the while normal transactional operations continue without interruption.

```js
collection.find("createdAt > '2023-05-22T22:24:24.217Z' && isSafeToEvict == false").subscribe()
```

#### Permanently Removing Data

The `remove` method, once invoked, permanently deletes the specified documents from the local datastore as well as from all other connected peers. Use the `remove` method with extreme caution; invoking `remove` results in irreversible data loss.

```swift
collection.findByID(docID).remove()
```

```kotlin
collection.findById(docId).remove()
```

```javascript
await ditto.store.collection('your_collection_name').findByID('unique_document_id').remove()
```

```java
ditto.store.collection("your_collection_name").findByID("unique_document_id").remove();
```

```csharp
ditto.Store.Collection("your_collection_name").FindByID("unique_document_id").Remove();
```

```cpp
ditto.get_store().collection("your_collection_name").find_by_id("unique_document_id").remove();
```

```rust
collection.find_by_id(id).remove().unwrap();
```

#### Flagging Soft Deletes

If you need to ensure that, although deleted, the data remains recoverable, you can add a soft-delete pattern to the document property tree:

```json
{
  "_id": "123abc",
  "name": "foo",
  "isArchived": true
}
```

For comprehensive information on deleting data in Ditto, see the platform manual.
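A soft delete like the one above is typically paired with reads that exclude archived documents, so the data disappears from the app without leaving the store. The following sketch illustrates the idea with invented helper names (`softDelete`, `findActive`); it is a toy model, not the Ditto API.

```javascript
// Sketch of a soft-delete pattern: documents are flagged rather than
// removed, and reads filter out archived documents.
// Illustrative only -- NOT the Ditto SDK.
const docs = [
  { _id: '123abc', name: 'foo', isArchived: true },
  { _id: '456def', name: 'bar', isArchived: false },
];

// "Delete" by flagging, so the data stays recoverable
function softDelete(collection, id) {
  const doc = collection.find((d) => d._id === id);
  if (doc) doc.isArchived = true;
}

// Reads exclude archived documents by default
function findActive(collection) {
  return collection.filter((d) => d.isArchived !== true);
}

softDelete(docs, '456def');
const visible = findActive(docs);
```

After the soft delete, `findActive` returns nothing, yet both documents remain physically present and can be restored by clearing the flag, which is the recoverability property the pattern exists to provide.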