In collaboration with our members, we are creating a Trust Framework that enables easier, more secure, trusted data sharing in the pursuit of net zero. By enabling greater discoverability and interoperability of data while reducing friction associated with sharing data, the Trust Framework will accelerate innovation based on currently under-utilised data.
The Trust Framework is built on five core principles:
- Cohesive — common rules across markets
- Interoperable — common processes, frameworks, connections
- Legal — common frameworks for data rights, liability, redress
- Controlled — common, rights-based consent management for access to data
- Universal — open to the whole market
Why is a Trust Framework for data sharing needed?
Huge quantities of data are being generated by our energy systems, financial systems, built world and environment. To fully exploit its value, we need to connect data to those who need it. We want to reduce the friction of finding, accessing, and using both commercial and non-commercial data to enable better decision-making to achieve net zero.
This isn’t a problem that needs new technology. Many attempts to consolidate data into new databases and portals struggle to scale. Our economic and infrastructure systems are being digitalised in a decentralised, distributed way. There is no ‘centre’ in such a system: we need to connect data, not collect it. Trust Frameworks enable this by setting the rules of the road and addressing the risks and concerns that prevent data from being shared and used.
According to a UN and World Bank report, investment in scalable, interoperable data ecosystems yields a 32x return on investment. Moreover, the lack of trusted data flows leads to poor decisions that make the transition to net zero riskier, and harder to quantify and invest in.
What does it do?
The Trust Framework is a very ‘thin’ layer. It assures that (a) organisations are who they say they are and (b) consent to share data is given under pre-agreed rules; and it (c) enables that consent to be linked to rules for licensing, liability transfer, and legal and operational processes (e.g. open standards for data, APIs).
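As an illustration of how thin this layer is, the three assurances could be captured in a single machine-readable record. The sketch below is hypothetical: the field names, identifiers, and URL are our assumptions, not part of any IB1 specification.

```python
# Hypothetical sketch: the three Trust Framework assurances as one record.
# All field names, identifiers, and URLs are illustrative, not an IB1 schema.
from dataclasses import dataclass, field

@dataclass
class SharingGrant:
    provider_id: str    # (a) verified identity of the data provider
    consumer_id: str    # (a) verified identity of the data consumer
    consent_scope: str  # (b) what the pre-agreed consent covers
    license_url: str    # (c) linked licensing rules
    liability_terms: str  # (c) linked liability-transfer rules
    operational_rules: list[str] = field(default_factory=list)  # (c) e.g. open API standards

grant = SharingGrant(
    provider_id="org:example-energy-co",
    consumer_id="org:example-analytics-ltd",
    consent_scope="half-hourly metering data, 2024",
    license_url="https://example.org/licenses/shared-data-v1",
    liability_terms="as per membership agreement",
    operational_rules=["open metadata standard", "REST API"],
)
```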
To enable pre-authorised access to data, the Trust Framework will include verification and assurance services for organisations that wish to share, access and use data. Verification and assurance are tiered, operating at both organisational and dataset levels (illustrated in the sketch after this list):
- Organisational checks: for example, confirming the organisation is a legally registered company with an Icebreaker One Membership Agreement and is registered with the Information Commissioner’s Office. Higher assurance levels will include know-your-customer (KYC) checks.
- Organisational policy alignment and/or compliance with policies and standards: for example, alignment with regulatory guidance such as open data best practices; a published data strategy; published net-zero-related reporting (e.g. TCFD, PCAF).
- Dataset alignment and/or compliance: for example, license checks for Open Data licenses; machine-readable metadata; use of Open Data Certificates; alignment with Data Sensitivity Classes; compliance with IB1 Trust Framework License Agreements.
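By way of illustration only, the tiers above could be expressed as a simple checklist structure against which an organisation or dataset is assessed. The tier names and wording below paraphrase the list and are not a formal IB1 schema:

```python
# Illustrative only: the verification and assurance tiers as a checklist.
ASSURANCE_TIERS = {
    "organisational_checks": [
        "legally registered company with an Icebreaker One Membership Agreement",
        "registered with the Information Commissioner's Office",
        "KYC checks (higher assurance levels)",
    ],
    "organisational_policy_alignment": [
        "alignment with regulatory guidance (e.g. open data best practices)",
        "published data strategy",
        "published net-zero reporting (e.g. TCFD, PCAF)",
    ],
    "dataset_alignment": [
        "license checks for Open Data licenses",
        "machine-readable metadata",
        "use of Open Data Certificates",
        "alignment with Data Sensitivity Classes",
        "compliance with IB1 Trust Framework License Agreements",
    ],
}

def satisfied_tiers(passed_checks: set[str]) -> list[str]:
    """Return the tiers whose checks all appear in passed_checks."""
    return [tier for tier, checks in ASSURANCE_TIERS.items()
            if all(check in passed_checks for check in checks)]
```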
What sectors need Trust Frameworks?
The principle of a Trust Framework is sector-agnostic. Trust Frameworks can be implemented for any sector, and even across sectors (for example, developing hydropower projects depends on collaboration, and data sharing, between the energy and water sectors). However, as each sector has its own specific needs, sector-specific Trust Frameworks will be needed to agree upon and operationalise sector-specific rules driven by commercial, regulatory, and technical requirements. We are developing the first Trust Framework for the energy sector, Open Energy, and are keen to work with partners to do the same in other areas of the economy.
How can we contribute to, and benefit from, Trust Frameworks?
To contribute to the development of the fundamental Trust Framework principles, or to their sector-specific implementations, become an Icebreaker One Member.
Priorities
Our three priorities for delivering a cohesive and interoperable data infrastructure are:
1. Design for search — the foundation for discovery and access
Data must be usable by machines, not just humans. Policies must mandate that data be machine-readable so that it can be collected and used efficiently. Just as important is the ability to discover that the data exists, what it is, where it is from, and how it may be used. Making this ‘metadata’ available is a priority, so that data can be found and information about it accessed. Policies must mandate the production of metadata that aids discovery.
This first priority is independent of the specifics of any taxonomy, ontology or other structural design. Such designs are numerous and domain-specific.
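As a sketch of what discovery metadata can look like in practice, here is a minimal record loosely modelled on common catalogue vocabularies such as DCAT. The field names and values are illustrative assumptions, not mandated by the Trust Framework:

```python
import json

# Hypothetical discovery-metadata record: enough for a machine to find the
# dataset, learn what it is, where it is from, and how it may be used.
dataset_metadata = {
    "title": "Example substation half-hourly load readings",
    "description": "Aggregated half-hourly electricity load per substation.",
    "publisher": "Example Energy Networks Ltd",
    "issued": "2024-01-15",
    "format": "text/csv",
    "accessURL": "https://data.example.org/substation-load.csv",
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

print(json.dumps(dataset_metadata, indent=2))
```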
2. Address data licensing policies — the foundation of access and usage
Licensing determines how data may be used. To unlock the value of Priority 1, policies must mandate the publishing of licenses (or usage rules) as metadata under an open license. This is essential for large-scale, many-to-many discovery of data that exists in a usable form.
Policies should mandate the publishing of any non-sensitive data under an open license (this mirrors the open-by-default policies of many countries). Policies should also mandate the publishing of sensitive data using a Trust Framework.
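A hypothetical sketch of the effect: when usage rules are published as open metadata, a data consumer can filter an entire catalogue by license without contacting each publisher. The catalogue entries and the Trust Framework license identifier below are invented for illustration; the two open-license URLs are real:

```python
# Hypothetical: filter a catalogue by usage rules published as open metadata.
OPEN_LICENSES = {
    "https://creativecommons.org/licenses/by/4.0/",
    "https://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/",
}

catalogue = [
    {"title": "Substation load", "license": "https://creativecommons.org/licenses/by/4.0/"},
    {"title": "Smart-meter readings", "license": "trust-framework:shared-data-license-v1"},
]

openly_usable = [d for d in catalogue if d["license"] in OPEN_LICENSES]
needs_trust_framework = [d for d in catalogue if d["license"] not in OPEN_LICENSES]

print([d["title"] for d in openly_usable])           # ['Substation load']
print([d["title"] for d in needs_trust_framework])   # ['Smart-meter readings']
```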
3. Address data governance — the foundation of open markets
Data increases in value the more it is connected. A focus on systemic cohesion and interoperability reduces the burden of sharing by creating common rules and frameworks for sharing that address good data governance. It ensures data is used appropriately for the purposes intended, addressing questions of security, liability and redress. Organisations must address their own data governance and should aim toward common rules and processes to increase adoption, reduce costs and simplify interactions between data suppliers and data users.