There’s a bit of a conundrum when it comes to scaling blockchain in supply chains: how can we scale its use without jeopardising either efficiency or security? After all, blockchain success depends on the ability to scale and grow. The network must be able to adapt, flex and expand in how it is used without compromising its initial level of security and performance.
Bear in mind that, in terms of where capital lies, supply chains represent two-thirds of global GDP. Performance must therefore be exemplary, and not compromised by growth itself.
The key is a consensus algorithm that is able to validate all transactions as they are made.
- As an object moves through a supply chain, an Ambrosus hardware sensor travels with it. This sensor then monitors the transition process from start to finish.
- The object, now connected to the sensor, is tracked throughout its journey, with real-time data about the object being sent into the network.
The data collected by the sensors is recorded as either an ‘Asset’ or an ‘Event’. An Asset is a unique digital ID captured on the blockchain: the physical object in digital terms. An Event is something that happens to the Asset on its journey; for example, a change in temperature gauged by the sensor.
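The Asset/Event distinction can be sketched roughly as below. This is purely illustrative; the field names are assumptions, not the Ambrosus data model.

```python
from dataclasses import dataclass

# Illustrative sketch of the Asset/Event distinction described above.
# Field names are hypothetical, not Ambrosus API.

@dataclass
class Asset:
    asset_id: str          # unique digital ID of the physical object


@dataclass
class Event:
    asset_id: str          # the Asset this Event happened to
    event_type: str        # e.g. a sensor-gauged temperature change
    value: float


can = Asset(asset_id="asset-001")
reading = Event(asset_id=can.asset_id,
                event_type="temperature_reading",
                value=4.2)
```

Every Event refers back to the Asset it belongs to, so the object’s full journey can later be reassembled from its Events.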
From here, the data travels from the sensor to the blockchain as ‘meta-data’ containing a unique ‘hash’.
This all sounds quite convoluted. However, a hash is simply an alphanumeric code that is unique to each data entry (each time something ‘happens’). The meta-data combines a unique ID, the author and a timestamp with the hash itself.
A collection of these pieces of meta-data forms a ‘Bundle’, which can include thousands of Assets and Events.
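A minimal sketch of the hash-and-metadata idea, using SHA-256 as a stand-in hash function (an assumption; the source doesn’t name the algorithm):

```python
import hashlib
import json
import time

def entry_hash(payload: dict) -> str:
    # A hash: an alphanumeric code unique to each data entry.
    blob = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def make_metadata(payload: dict, author: str) -> dict:
    # Meta-data: a unique ID, the author and a timestamp,
    # combined with the hash itself.
    h = entry_hash(payload)
    return {"id": h[:16],
            "author": author,
            "timestamp": int(time.time()),
            "hash": h}

# A Bundle is a collection of such metadata entries (up to thousands).
bundle = [make_metadata({"asset": "asset-001", "temp_c": 4.2},
                        author="sensor-17")]
```

The same payload always produces the same hash, so any tampering with a recorded entry would be immediately visible.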
The Bundle itself is sent to 7 Atlas Nodes, where it is stored. It is effectively stamped into the blockchain and cannot be edited or removed, providing accurate, immutable tracking and quality assurance.
Furthermore, the information stored on the blockchain in this way is publicly accessible. An individual holding an object that has travelled and been recorded in this way could, in principle, view all the data pertaining to that object.
On a simple level, we can see how this is straightforward for one fairly generic ‘object’, such as a can of tomatoes, undergoing just one event.
However, this rapidly becomes considerably less straightforward with far larger quantities of Assets and the Events they go through. Accurately storing and collating such vast volumes of data raises security concerns, as well as the risk of breakdowns and bottlenecks.
How to solve these problems
Given this is a very real obstacle to scaling blockchain in supply chains, networks such as AMB-NET are facing it head-on. In the Ambrosus blockchain, Apollo Nodes are permission-based nodes responsible for validating each and every transaction as it passes through. Only permitted Apollo Nodes, with the relevant authority, can validate the data to be recorded.
When information needs to be verified, the Apollo Nodes ‘connect’ and ‘agree’ on which data to record as the most valid, and this consensus is condensed into one new block. Blocks are generated every 5 seconds and contain somewhere in the region of 50 transactions.
However, this throughput is still clearly inferior to what would actually be needed on an international scale.
To cope with this scaling issue, AMB-NET uses decentralised gateways known as Hermes Nodes. These can gather up to 16,384 Events and Assets into just one Bundle, which is then written to the blockchain in a single hit. From here, the mathematics does the work for us: an Apollo Node validates each Bundle of 16,384 Assets and Events, so a block containing 50 such Bundles holds 819,200 sensor readings. The result is that throughput on the Ambrosus network increases to over 10,000 times what it was before.
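The per-block arithmetic checks out, and can be verified in a few lines:

```python
EVENTS_PER_BUNDLE = 16_384   # Assets and Events a Hermes Node packs per Bundle
BUNDLES_PER_BLOCK = 50
TXS_PER_BLOCK_BEFORE = 50    # roughly 50 plain transactions per block before

readings_per_block = EVENTS_PER_BUNDLE * BUNDLES_PER_BLOCK
print(readings_per_block)    # 819200 sensor readings per block

# Compared with ~50 transactions per block, the throughput multiplier is:
multiplier = readings_per_block / TXS_PER_BLOCK_BEFORE
print(multiplier)            # 16384.0, comfortably over 10,000x
```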
It’s a mathematical equation
It really is a matter of managing the numbers, and this enables capability on a far larger scale. The current maximum capacity of AMB-NET is equivalent to processing 10 Bundles per second, with 16,384 Assets and Events in each Bundle. That equates to 600 Bundles each minute, 36,000 an hour and 864,000 each day. In other words, the Ambrosus network has the capacity to process around 14.15 billion Assets and Events each day.
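The daily-capacity figures above follow directly from the block rate, and can be reproduced step by step:

```python
EVENTS_PER_BUNDLE = 16_384
BUNDLES_PER_SECOND = 10      # 50 Bundles per 5-second block

bundles_per_minute = BUNDLES_PER_SECOND * 60            # 600
bundles_per_hour = bundles_per_minute * 60              # 36,000
bundles_per_day = bundles_per_hour * 24                 # 864,000

events_per_day = bundles_per_day * EVENTS_PER_BUNDLE
print(events_per_day)        # 14155776000, i.e. ~14.15 billion per day
```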
What does this mean in terms of scalability?
The AMB-NET scenario certainly begins to paint a picture in which scalability can be achieved securely, without compromising efficiency overall. At its core is a consensus algorithm able to swiftly validate each transaction as it is made. For Ambrosus, this is done through the Apollo Nodes, which effectively reduce the number of transactions written to the chain from moment to moment.
This, in theory, is scalable to allow an ever-increasing number of consumer products to be verified and monitored, delivering the scalability we are after.