Democratic Access Control¶
In 2021 we applied for additional funding, because our NGIZeroSearch project showed us the need to extend our solution in several ways. Our setup produced two main findings, and in this chapter we would like to introduce one of the open questions: how can we approve search entries from remote peers?
That sounds simple, but is in fact very difficult. Just imagine that there are millions of search entries, and we only want to add those that have been approved by a set of ‘search engine optimizer’ (SEO) entities. In addition, there could be more than one SEO, which raises the question of which entity assigns these SEO entities their rights. As a matter of fact: simple digital signatures and hierarchical PKI structures won’t be of any help to us here.
The work presented on this page is part of our funding granted by NGI Assure. We are very pleased that our proposal has been selected.
Let’s start with a simpler use case that we would like to enable with our project, one which shows the required interaction between two (or three) parties.
I Am Who I Am - Really?¶
In our first use case, a worker would like to use an internal desktop system to start working on the tasks ahead. Until now, that has meant getting a local account on this specific system, which automatically leads to yet another local password. In reality, however, the worker already has an identity; he only needs to be enabled to act in his new role as a worker. So how can he prove to the internal system that he (with his digital identity) is sitting in front of the desktop?
Or in other words: how can two entities obtain mutual approval of each other without sharing passwords?
The desktop system could display its own digital identity as a QR code on its screen. Any user who would like to operate this PC can use his own digital identity on his smartphone: he simply takes a photo of the QR code and approves with his own identity that he would like to “use” the desktop. Note that we do not want to add a signature to the digital identity of the desktop! We just need to create a digital proof in the form of a content-based signature or zero-knowledge proof (zkproof). The user adds a timestamp and a signature, and stores both in his timestamped attestation (TSA) based protocol. By publishing this TSA to the company, the user identification can be carried out before the user actually logs into the computer. How does the user then unlock the desktop? For example, by supplying the zkproof attribute (which could be based on random data) to the desktop as the password. Any server within the company can then check the zkproof added to the desktop identity against the published TSA entry, and thus “knows” which user is currently operating the desktop system.
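The flow above can be sketched in a few lines of Python. This is a minimal illustration, not the actual neuropil implementation: the function names are hypothetical, and an HMAC with a user-held key stands in for the real content-based signature or zkproof; a real design would use public-key signatures so the verifier never holds the user's secret.

```python
import hashlib
import hmac
import os
import time

def fingerprint(identity: bytes) -> str:
    """Content-based fingerprint of an identity (what the QR code would encode)."""
    return hashlib.sha256(identity).hexdigest()

# --- user side: photograph the QR code, create the unlock token and TSA entry ---
def approve_desktop(desktop_fp: str, user_key: bytes):
    token = os.urandom(16).hex()  # random unlock secret, later typed in as the "password"
    ts = int(time.time())
    # the TSA entry commits to the desktop identity, a timestamp, and a hash of the token
    payload = f"{desktop_fp}:{ts}:{hashlib.sha256(token.encode()).hexdigest()}"
    sig = hmac.new(user_key, payload.encode(), hashlib.sha256).hexdigest()
    return token, {"payload": payload, "sig": sig, "ts": ts}

# --- company server side: verify the typed-in token against the published TSA entry ---
def verify_unlock(desktop_fp: str, token: str, tsa_entry: dict, user_key: bytes) -> bool:
    expected = hmac.new(user_key, tsa_entry["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tsa_entry["sig"]):
        return False  # TSA entry was not created by this user
    fp, _ts, token_hash = tsa_entry["payload"].split(":")
    return fp == desktop_fp and token_hash == hashlib.sha256(token.encode()).hexdigest()
```

Because only a hash of the token is published, the TSA entry can be distributed in advance; the token itself is revealed only to the desktop at login time.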
We could extend the example above with more details on the user management system of the company. But let’s return to our initial search-entry use case and add the further components that we could need.
Adding Distributed Search Entries¶
In our NGIZeroDiscovery project we built up the capability to store search records in our identity hash table (IHT). However, if everybody could add search entries, the database would soon be full, and there could be a lot of malicious content floating around. Although we only store a privacy-preserving record linkage (PPRL) fingerprint and the public access token in our search entries, the potential for misuse is already high enough. What is currently missing is a governance structure that, on the one hand, helps people to be found by queries and, on the other hand, allows the search entries to be augmented and moderated.
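To make the shape of such a search entry concrete, here is a toy sketch under stated assumptions: the record layout is hypothetical, and the PPRL fingerprint is modeled as an order-insensitive digest of token hashes, whereas real PPRL schemes typically use Bloom-filter encodings.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class SearchEntry:
    pprl: str                 # privacy-preserving record linkage fingerprint
    access_token: str         # public access token of the publishing identity
    approvals: list = field(default_factory=list)  # SEO entities that approved this entry

def make_pprl(tokens) -> str:
    """Toy PPRL: fold sorted token hashes into one digest, so the same keyword
    set always yields the same fingerprint without exposing the keywords."""
    h = hashlib.sha256()
    for t in sorted(tokens):
        h.update(hashlib.sha256(t.encode()).digest())
    return h.hexdigest()
```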
We have to divide this use case into several parts and look at the responsibilities of each participant. At one end, we would like to enable an organization (the SEO Approver) to assign SEO entities the power to approve search entries. All this entity has to do is add the content-based signature of each SEO entity it assigns to its TSA-based protocol. This information can then be forwarded to the search nodes that actually store the search entries, because they need to know which SEO entities they should trust.
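The Approver's delegation step can be sketched as an append-only, timestamped list. This is a minimal sketch with hypothetical names; a plain SHA-256 digest stands in for neuropil's content-based signature.

```python
import hashlib
import time

def content_signature(identity: bytes) -> str:
    # stand-in for a content-based signature of an SEO's digital identity
    return hashlib.sha256(identity).hexdigest()

class ApproverTSA:
    """Append-only, timestamped delegation list kept by the SEO Approver.
    Search nodes receive these entries to learn which SEOs to trust."""

    def __init__(self):
        self.entries = []

    def assign_seo(self, seo_identity: bytes) -> None:
        self.entries.append({"seo": content_signature(seo_identity),
                             "ts": int(time.time())})

    def trusted(self, seo_identity: bytes) -> bool:
        fp = content_signature(seo_identity)
        return any(e["seo"] == fp for e in self.entries)
```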
There can be many different kinds of SEO entities: one could be looking at a search entry from the perspective of sustainability, another from the perspective of law. In this way many different aspects can be handled by specialized SEOs with their respective expertise. Their role is to approve search entries (of companies) as valid if they match their criteria and if the content description is in good enough shape to be found (“sanitizing” the search entry with respect to their expertise). Each SEO could set up its own search space, or several could work in a shared search space/domain that allows them to host a bigger dataset than each one alone. If a company requests to be verified by an SEO, the SEO can check e.g. the webpage and create the necessary TSA protocol entry that matches the digital identity of the company/webpage. The SEO does not want to add the digital watermark of each single webpage to its TSA at this step, because then we would need to distribute this information to all search nodes.
The company can then add the digital content watermark and record it in its own TSA. Since its own digital identity (a specialized search identity representing the company) was previously approved by the SEO entity, the link from the final record up to the SEO Approver is complete and can always be verified. The company can then, after possibly modifying its webpages according to the SEO’s feedback, publish its own search entries. Each search index node can check whether the company’s identity has been approved by an SEO, but it doesn’t need to check each record individually.
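The resulting trust chain (Approver → SEO → company → search entry) can be verified link by link. The following sketch assumes simplified data structures of our own choosing, not the actual protocol encoding: the Approver's delegations arrive as a set of SEO fingerprints, and each SEO's approvals as a set of company fingerprints.

```python
import hashlib

def fp(identity: bytes) -> str:
    """Fingerprint standing in for a content-based identity signature."""
    return hashlib.sha256(identity).hexdigest()

def verify_entry(entry: dict, approver_delegations: set, seo_approvals: dict) -> bool:
    """A search node's per-identity check: the approving SEO must be delegated
    by the Approver, and the publishing company must be approved by that SEO.
    Individual records from an approved company need no further check."""
    seo = entry["approved_by"]
    return (seo in approver_delegations
            and entry["publisher"] in seo_approvals.get(seo, set()))
```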
The added benefit for a user searching for content in this kind of setup is that he can select the set of SEO entities he would like to trust. All search entries returned to him that do not match his selection will be filtered out. With this setup we avoid the need to check each webpage ourselves, and instead enable a market of SEO providers that can compete on different aspects and areas of expertise.
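The user-side filter then reduces to a simple set intersection. A minimal sketch, assuming each result carries the fingerprints of the SEOs that approved it:

```python
def filter_by_trusted_seos(results: list, trusted: set) -> list:
    """Keep only search entries approved by at least one SEO the user trusts."""
    return [r for r in results if trusted & set(r["approvals"])]
```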
However, the picture is still not complete. Let’s have a look at our third and last use case before moving into abstract definitions.
Adding Distributed Intrusion Detection¶
The third use case that we would like to realize is the implementation of a remote intrusion detection system. Each system is able to record its own state / system configuration, and it can do so periodically. Each system is also able to send its attestation result to a different peer (e.g. the system administrator), who compares the result with the desired state of the system and approves its conformity.
When a third party comes along and would like to use this system, it can now inspect different attestation results: the one from the machine, the one from the administrator, and it can even compare the result with the desired state that it expects the system to be in.
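The attestation comparison can be sketched as a digest over a canonical serialization of the recorded state. This is an illustrative assumption about the mechanism, not the actual attestation format: machine, administrator, and third party each run the same check against their own notion of the desired state.

```python
import hashlib
import json

def attest(state: dict) -> str:
    """Digest of a recorded system configuration. Canonical JSON (sorted keys)
    ensures identical configurations always yield identical digests."""
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def conforms(reported_digest: str, desired_state: dict) -> bool:
    """Administrator or third party: compare a peer's reported attestation
    digest with the state the system is expected to be in."""
    return reported_digest == attest(desired_state)
```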
From this intrusion detection use case we can see that there is one more role we have to add in order to achieve the full potential of our NGI Assure project.
What’s next ?¶
We would like to review and reuse what is already there, but extend it with the requirements that we have defined and described in the use cases. Would you like to join our efforts? Hop over to https://www.gitlab.com/pi-lar/neuropil-ldtsa and share your point of view. Any feedback, question or hint can make a difference. We are aiming to build an RfC that can be implemented by others as well, and it will certainly be an integral part of our neuropil cybersecurity mesh!