Data Remapping Techniques

I have a list of hashes/associative arrays and other nested objects, such as hashes of hashes. The sample data is in JSON format.

The Easy Part

From the complex data structure described above, I'm only interested in particular repeating {k,v} pairs that can be extracted, restructured, and iteratively passed as a parameter to a remote process. The remote process performs an action on the value v and generates an output 'V'. The output 'V' can then be mapped back to 'k' as {k,V} - a fairly common problem, illustrated below (with a code sketch after the iterations):

Iteration 1:

{k1,v1}=="Extract and Re-structure v1 for Input"==>(process)=="output"==>{V1}=="map to k1"==>{k1,V1}

Iteration 2:

{k2,v2}=="Extract and Re-structure v2 for Input"==>(process)=="output"==>{V2}=="map to k2"==>{k2,V2}

Iteration 3:

.......
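
A minimal Python sketch of the per-pair flow, assuming a hypothetical process() that takes a single value and returns the transformed output (the restructuring step is elided):

def remap_one_by_one(adict, process):
    # One remote call per {k,v} pair
    for k, v in adict.items():
        adict[k] = process(v)  # map the output V back to the same key k

This is the baseline the "tricky" variant tries to beat: len(adict) remote calls.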

The Tricky Part

The remote process has an additional capability that allows it to ingest multiple values v in a single call, using a boundary delimiter (e.g. ':'). Illustrated below, with a code sketch after the iterations:

Iteration 1:

{k1,v1},{k2,v2}=="Extract and Re-structure v1:v2 for Input"==>(process)=="output"==>{V1:V2}=="map to k1, k2"==>{k1,V1},{k2,V2}

Iteration 2:

{k3,v3},{k4,v4}=="Extract and Re-structure v3:v4 for Input"==>(process)=="output"==>{V3:V4}=="map to k3, k4"==>{k3,V3},{k4,V4}

Iteration 3:

.......
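
A sketch of the batched flow. It assumes (this is an assumption, not a documented interface) that process() accepts a single ':'-delimited string and returns its outputs ':'-delimited in the same order; remap_two_at_a_time is a made-up name:

def remap_two_at_a_time(adict, process, sep=':'):
    # Assumed interface: process('v1:v2') returns 'V1:V2', outputs in input order
    keys = list(adict)
    for pair in (keys[i:i + 2] for i in range(0, len(keys), 2)):
        payload = sep.join(str(adict[k]) for k in pair)  # 'v1:v2'
        outputs = process(payload).split(sep)            # ['V1', 'V2']
        adict.update(zip(pair, outputs))                 # {k1: V1, k2: V2}

This halves the number of remote calls relative to the per-pair loop, provided the delimiter never appears inside a value.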

Approach?

One thing that comes to mind is to use a 'map' operation instead of an explicit iteration/cursor. What are some other techniques/methods for improving the performance of the "tricky" scenario? The objective is to reduce the number of calls to the remote process, but not at the cost of performance.

Python or Go boilerplate suggestions are welcome.

In Python:

def passPairs(adict):
    # dict.popitem() takes no arguments, so pop each value by its key
    v1 = adict.pop('k1')
    v2 = adict.pop('k2')
    V1, V2 = process(v1, v2)  # corrected the case
    adict.update(zip(('k1', 'k2'), (V1, V2)))  # map the outputs back to their keys
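
Since the objective is fewer calls, the hard-coded pair above could be generalized by chunking the keys with itertools.islice so the batch size becomes a tunable knob. remap_batched and batch_size are hypothetical names, and the process() interface is the same assumed delimited-string one as above:

from itertools import islice

def remap_batched(adict, process, batch_size, sep=':'):
    # One remote call per chunk of batch_size keys: join the values,
    # call the process once, and zip the delimited outputs back.
    it = iter(adict)
    while batch := list(islice(it, batch_size)):
        payload = sep.join(str(adict[k]) for k in batch)
        adict.update(zip(batch, process(payload).split(sep)))

The number of calls drops to ceil(len(adict) / batch_size), making batch_size the lever for trading call count against per-call payload size.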