Can I use Azure Table Storage for conditional inserts?
Can I use the Windows Azure Table Storage service for conditional inserts?
Basically, what I want to do is insert new rows/entities into a partition of the table storage service if and only if nothing in the partition has changed since I last read it.
In case you are wondering, I have event sourcing in mind, but I think the problem is more general than that.
Basically, I want to read part or all of a partition and make a decision based on the content of the data. To ensure that nothing in the partition has changed since the data was loaded, the insert should behave like ordinary optimistic concurrency: it should succeed only if nothing in the partition has changed, i.e. no rows were added, updated, or deleted.
Normally in REST services I would use ETags to control concurrency, but as far as I know, partitions do not have an ETag.
The best solution I can come up with is to maintain a single row/entity per partition in the table that holds a timestamp/ETag, and then make every insert part of a batch consisting of the insert plus a conditional update of that "timestamp entity". However, this sounds a little cumbersome and fragile. A sketch of what I mean follows.
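To make that concrete, here is a minimal sketch of the workaround I have in mind, assuming the Python `azure-data-tables` SDK; the table name, partition key, marker row key, and retry handling are all made up for illustration:

```python
from azure.core import MatchConditions
from azure.data.tables import TableClient, TableTransactionError, UpdateMode

CONN_STR = "UseDevelopmentStorage=true"  # e.g. the Azurite local emulator

client = TableClient.from_connection_string(CONN_STR, table_name="events")

# A single marker entity per partition; its ETag stands in for the
# "version" of the whole partition.
marker = client.get_entity(partition_key="stream-1", row_key="!version")
etag = marker.metadata["etag"]

# ... read the rest of the partition and decide what to insert ...

# One atomic batch: the new row plus a conditional replace of the marker.
# This only detects changes if every writer bumps the marker the same way.
operations = [
    ("create", {"PartitionKey": "stream-1", "RowKey": "00000042",
                "Payload": "..."}),
    ("update", marker, {"mode": UpdateMode.REPLACE,
                        "etag": etag,
                        "match_condition": MatchConditions.IfNotModified}),
]
try:
    client.submit_transaction(operations)
except TableTransactionError:
    pass  # the marker changed (or the RowKey collided): reload and retry
```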
Can this be accomplished more cleanly with the Azure Table Storage service?
Solution
A thousand-foot view
Let me share a little story with you... Once upon a time, someone wanted to persist events for an aggregate (famous from Domain-Driven Design) in response to a given command. This person wanted to make sure that an aggregate would only be created once, and that any form of optimistic concurrency could be detected.

To tackle the first problem (an aggregate should only be created once), he inserted a record into a transactional medium that would throw when a duplicate aggregate (or, more accurately, its primary key) was detected. What he inserted was the aggregate identifier as the primary key, plus the unique identifier of a changeset. A changeset, in this context, means the set of events produced by the aggregate while processing the command. If someone or something beat him to it, he would consider the aggregate already created and leave it at that. The changeset itself would be stored beforehand in a medium of his choosing. The only promise this medium had to make was to return the stored content when asked. Any failure to store the changeset was considered a failure of the whole operation.

To tackle the second problem (detecting optimistic concurrency in the further lifecycle of the aggregate), he would, after writing yet another changeset, update the aggregate record in the transactional medium if and only if nobody had updated it behind his back (i.e. compared to what he last read before executing the command). Should that happen, the transactional medium would inform him, which would cause him to restart the whole operation, rereading the aggregate (or rather its changesets) so the command could succeed this time.

Of course, now he had solved the problem of writing; next came the problem of reading. How would one read all the changesets that make up an aggregate's history? After all, the transactional medium only held the last committed changeset associated with the aggregate identifier. So he decided to embed some metadata in each changeset. Among the metadata (which is not unusual to have as part of a changeset) would be the identifier of the previously committed changeset. This way he could walk the history of his aggregate "down the line", just like a linked list.

As an added bonus, he would also store the command message identifier as part of the changeset metadata. This way, when reading the changesets, he could know in advance whether the command he was about to execute on the aggregate was already part of its history.

And all was well.
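As a sketch only, here is roughly how the write and read sides of that story might map onto a table-storage-style API, again using the Python `azure-data-tables` SDK. The "head" row key, the `LastChangesetId` attribute, and the `blob_load` helper are assumptions of mine for illustration, not the storyteller's actual schema:

```python
from azure.core import MatchConditions
from azure.core.exceptions import ResourceExistsError, ResourceModifiedError
from azure.data.tables import TableClient, UpdateMode


def create_aggregate(table: TableClient, aggregate_id: str, changeset_id: str) -> bool:
    """Create-once: the insert throws if the aggregate's key already exists."""
    head = {
        "PartitionKey": aggregate_id,
        "RowKey": "head",                 # illustrative fixed row key
        "LastChangesetId": changeset_id,
    }
    try:
        table.create_entity(head)
        return True
    except ResourceExistsError:
        return False  # someone beat us to it: the aggregate already exists


def append_changeset(table: TableClient, aggregate_id: str, new_changeset_id: str) -> bool:
    """Optimistic concurrency: replace the head only if unchanged since our read."""
    head = table.get_entity(partition_key=aggregate_id, row_key="head")
    etag = head.metadata["etag"]          # captures "what we last read"

    # ... store the changeset itself (with parent = head["LastChangesetId"]
    # and the command id in its metadata) in the changeset medium first ...

    head["LastChangesetId"] = new_changeset_id
    try:
        table.update_entity(head, mode=UpdateMode.REPLACE,
                            etag=etag,
                            match_condition=MatchConditions.IfNotModified)
        return True
    except ResourceModifiedError:
        return False  # somebody updated the head behind our back: retry


def load_history(table: TableClient, blob_load, aggregate_id: str) -> list:
    """Walk the changesets backwards through their metadata, like a linked list."""
    head = table.get_entity(partition_key=aggregate_id, row_key="head")
    changesets = []
    cursor = head["LastChangesetId"]
    while cursor is not None:
        changeset = blob_load(cursor)     # hypothetical fetch from the changeset medium
        changesets.append(changeset)
        cursor = changeset["metadata"].get("parent_changeset_id")
    changesets.reverse()                  # oldest first
    return changesets
```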
P.S.:
1. The transactional medium and the changeset storage medium could be one and the same.
2. The changeset identifier must not be the command identifier.
3. Feel free to poke holes in the story :-)
4. Although not directly related to Azure Table Storage, I have successfully implemented the story above using AWS DynamoDB and AWS S3.
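Since note 4 mentions DynamoDB: the two conditional writes in the story map onto DynamoDB condition expressions roughly like this with `boto3` (table and attribute names are made up):

```python
import boto3
from boto3.dynamodb.conditions import Attr
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("aggregates")  # illustrative table name


def _is_condition_failure(error: ClientError) -> bool:
    return error.response["Error"]["Code"] == "ConditionalCheckFailedException"


# Create-once: the condition rejects the write if the key already exists.
try:
    table.put_item(
        Item={"AggregateId": "order-42", "LastChangesetId": "cs-001"},
        ConditionExpression=Attr("AggregateId").not_exists(),
    )
except ClientError as error:
    if not _is_condition_failure(error):
        raise
    # Duplicate creation detected: the aggregate already exists.

# Optimistic concurrency: update only if the head still points at what we read.
try:
    table.update_item(
        Key={"AggregateId": "order-42"},
        UpdateExpression="SET LastChangesetId = :new",
        ConditionExpression=Attr("LastChangesetId").eq("cs-001"),
        ExpressionAttributeValues={":new": "cs-002"},
    )
except ClientError as error:
    if not _is_condition_failure(error):
        raise
    # A concurrent writer got there first: re-read and retry the command.
```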