DynamoDB ConsistentRead for Global Secondary Indexes

I have the following table structure:

ID              string    `dynamodbav:"id,omitempty"`
Type            string    `dynamodbav:"type,omitempty"`
Value           string    `dynamodbav:"value,omitempty"`
Token           string    `dynamodbav:"token,omitempty"`
Status          int       `dynamodbav:"status,omitempty"`
ActionID        string    `dynamodbav:"action_id,omitempty"`
CreatedAt       time.Time `dynamodbav:"created_at,omitempty"`
UpdatedAt       time.Time `dynamodbav:"updated_at,omitempty"`
ValidationToken string    `dynamodbav:"validation_token,omitempty"`

and I have 2 Global Secondary Indexes: one for the Value field (ValueIndex) and one for the Token field (TokenIndex). Later, somewhere in the internal logic, I perform an Update of this entity followed by an immediate read of it via one of these indexes (ValueIndex or TokenIndex), and I see the expected problem: the data is not yet updated. I can't use ConsistentRead in these cases, because Global Secondary Indexes don't support that option. As a result I can't run my load tests over this logic, because the data is not ready when the tests run with 10-20-30 threads. So my question: is it possible to solve this problem somehow? Or should I reorganize my table, splitting it into 2-3 different tables and moving fields like Value and Token into the HASH key or SORT key?

GSIs are updated asynchronously from the table they are indexing. The updates to a GSI typically occur in well under a second. So, if you're after an immediate read from a GSI after an insert / update / delete, there is the potential to get stale data. This is how GSIs work - nothing you can do about that. However, you need to be really mindful of three things:

  1. Make sure you keep your GSI lean - that is, only project the absolute minimum attributes that you need. Less data to write will make it quicker.
  2. Ensure that your GSIs have the correct provisioned throughput. If they don't, they may not be able to keep up with activity on the table, and you'll see long delays in the GSIs being kept in sync.
  3. If an update causes the keys in the GSI to be updated, you'll need 2 units of throughput provisioned per update. In essence, DynamoDB will delete the item then insert a new item with the keys updated. So, even though your table has 100 provisioned writes, if every single write causes an update to your GSI key, you'll need to provision 200 write units.

Once you've tuned your DynamoDB setup and you still absolutely cannot handle the brief delay in GSIs, you'll probably need to use a different technology. Note that splitting your table into multiple tables would have the same (if not worse) problem: you'd update one table, then try to read the data from another table before the values had been written to it.

I suspect that once you tune DynamoDB for your situation, you'll get pretty damn close to what you want.