
An Architecture Outline

16 Posts
2 Users
2 Likes
1,391 Views
Posts: 33
Topic starter
(@whatusername)
Eminent Member
Joined: 3 years ago

This is just my brain dump. I'm going to update it frequently.

 

TCP/IP

In the beginning we can send data from one IP address to another.

 

Encryption

The problem is that the data can be intercepted by a third party. So we add a new layer where the traffic is encrypted.

 

Anonymous Routing

The next problem is that an IP address can be linked to a real person. To avoid this we add a new layer where the data doesn't go directly from one IP address to another but passes through intermediate computers, so the receiver doesn't know the sender's real IP.
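A minimal sketch of the layering idea behind onion routing. A toy XOR keystream stands in for real encryption, and the keys are made up for illustration; the point is only that the sender wraps one layer per relay and each relay peels exactly one:

```python
import os

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher; XOR with a keystream is symmetric,
    # so the same call both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_onion(message: bytes, relay_keys: list) -> bytes:
    """Sender wraps the message once per relay, last relay's layer innermost."""
    packet = message
    for key in reversed(relay_keys):
        packet = xor_crypt(packet, key)
    return packet

# Three relays; each knows only its own key and peels one layer.
keys = [os.urandom(16) for _ in range(3)]
packet = build_onion(b"hello", keys)
for key in keys:                 # each hop in order strips its layer
    packet = xor_crypt(packet, key)
print(packet)                    # b'hello' arrives at the receiver
```

Real onion routing (Tor, Lokinet) additionally encrypts the per-hop addressing, so each relay learns only its immediate neighbors, never the whole route.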

 

Obfuscation

Now if our new protocol can be recognized by packet analysis it can be blocked, so we need to make it indistinguishable from some popular protocols.

 

New Address System

At this level we need a new address system for nodes instead of IP addresses.
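One common approach, sketched here, is to derive a node's address from its public key, so the address itself is self-authenticating: knowing the address lets you verify you're talking to the key's owner. (This is similar in spirit to how I2P derives its .b32.i2p names by hashing key material; the exact digest and encoding below are demo choices.)

```python
import base64
import hashlib

def node_address(public_key: bytes) -> str:
    """Derive a compact, self-authenticating address from a public key."""
    digest = hashlib.sha256(public_key).digest()
    # 20 bytes of digest -> 32 base32 characters, no padding needed.
    return base64.b32encode(digest[:20]).decode().lower()

addr = node_address(b"example public key bytes")
print(addr)   # 32 base32 characters, stable for the same key
```

Because the address is a pure function of the key, no central registry is needed to hand out names, and nobody can squat on another node's address without its private key.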

 

File Transfer

Next we need a low level protocol to send a file from one node to another.
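As a sketch of such a protocol, a file can travel as independently verifiable chunks; the chunk size and the (index, hash, payload) record format here are arbitrary choices for the demo:

```python
import hashlib

CHUNK_SIZE = 8  # tiny for the demo; a real protocol would use e.g. 256 KiB

def make_chunks(data: bytes, size: int = CHUNK_SIZE):
    """Split a file into (index, sha256, payload) records for transfer."""
    return [
        (i, hashlib.sha256(data[off:off + size]).hexdigest(), data[off:off + size])
        for i, off in enumerate(range(0, len(data), size))
    ]

def reassemble(records) -> bytes:
    """Verify each chunk against its hash, then join in index order."""
    parts = []
    for index, digest, payload in sorted(records):
        if hashlib.sha256(payload).hexdigest() != digest:
            raise ValueError(f"chunk {index} failed integrity check")
        parts.append(payload)
    return b"".join(parts)

data = b"a file crossing the network in pieces"
assert reassemble(make_chunks(data)) == data
```

Since every record carries its own hash and index, chunks can arrive out of order or from different nodes and still be verified and reassembled.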

 

Server Nodes

Next we can divide all nodes participating in the network into clients and servers. Clients will be regular users; servers will handle the routing, store files and provide other low-level services.

 

Cloud

Then we create a new abstraction layer where we unite all the individual server nodes into one cloud supercomputer with a single storage. The stored files will be distributed among the server nodes with some level of redundancy, so the cloud remains available even if individual nodes stop working.
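A toy placement scheme illustrating the redundancy idea. The round-robin assignment and the replication factor of 2 are arbitrary demo choices (and `redundancy` must not exceed the node count here):

```python
def place_chunks(chunk_ids, nodes, redundancy=2):
    """Round-robin each chunk onto `redundancy` distinct server nodes."""
    placement = {}
    for i, chunk in enumerate(chunk_ids):
        placement[chunk] = [nodes[(i + r) % len(nodes)] for r in range(redundancy)]
    return placement

def file_survives(placement, dead_nodes) -> bool:
    """The file is recoverable iff every chunk lives on some healthy node."""
    return all(
        any(node not in dead_nodes for node in holders)
        for holders in placement.values()
    )

placement = place_chunks(["c0", "c1", "c2"], ["A", "B", "C"], redundancy=2)
# {'c0': ['A', 'B'], 'c1': ['B', 'C'], 'c2': ['C', 'A']}
print(file_survives(placement, dead_nodes={"B"}))       # True
print(file_survives(placement, dead_nodes={"A", "B"}))  # False: c0 lost
```

Real systems (IPFS, distributed filesystems) go further with erasure coding and re-replication when nodes disappear, but the survival condition is the same.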

 

Public-key cryptography

Now the clients of the network can send and store files using the Cloud. At this point we use public-key cryptography to introduce file ownership and end-to-end encryption.
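A sketch of ownership-by-signature, with one big caveat: stdlib HMAC is used here as a stand-in, and HMAC is symmetric, so only the key holder can verify. A real network would use asymmetric signatures (e.g. Ed25519), letting anyone check ownership with just the public key:

```python
import hashlib
import hmac

def sign(private_key: bytes, file_bytes: bytes) -> str:
    """Produce a proof that the key holder vouches for these exact bytes."""
    return hmac.new(private_key, file_bytes, hashlib.sha256).hexdigest()

def verify(private_key: bytes, file_bytes: bytes, signature: str) -> bool:
    """Recompute and compare in constant time."""
    return hmac.compare_digest(sign(private_key, file_bytes), signature)

owner_key = b"alice-secret"
proof = sign(owner_key, b"my article")
print(verify(owner_key, b"my article", proof))  # True: owner confirmed
print(verify(owner_key, b"tampered!", proof))   # False: content changed
```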

 

API

Now that all the low-level infrastructure is ready, we provide an API for developers to build all kinds of applications.

 

15 Replies
Posts: 33
Topic starter
(@whatusername)
Eminent Member
Joined: 3 years ago

Oops, it looks like you can only edit it once or there's some kind of timeout on editing.

Posts: 33
Topic starter
(@whatusername)
Eminent Member
Joined: 3 years ago

Obviously, we don't need to implement all this from scratch. We can build it from already existing components.

We can look at Tor, I2P, Freenet, BitTorrent, crypto projects like IPFS, etc. for ideas and ready-made components.

Posts: 33
Topic starter
(@whatusername)
Eminent Member
Joined: 3 years ago

The question is whether it can be based on volunteers and donations for the computing power, or whether we need some kind of cryptocurrency and market mechanisms. The first approach, used for example by Tor, is going to be completely free for everyone, but will lead to spam and DDoS problems. The second approach can protect us from that, but everybody has to pay for it.

Maybe some kind of intermediate way can be found, where donations go to some kind of fund from which simple operations for regular users are paid.

Posts: 33
Topic starter
(@whatusername)
Eminent Member
Joined: 3 years ago

Lokinet uses crypto technologies to solve some of Tor's problems.

https://lokinet.org/

https://docs.loki.network/Advanced/SybilResistance/

 

Posts: 33
Topic starter
(@whatusername)
Eminent Member
Joined: 3 years ago

In a way we already have decentralization: it's the Internet itself. We already have all these facilities implemented ad hoc in one way or another. What we need is a consolidation of low-level developers to integrate all these components into a single system, application developers to create all kinds of applications, and users to create a network effect. And this is where the main problem is. Basically we need to consolidate the whole Internet.

goldmund
Posts: 5
(@goldmund)
Active Member
Joined: 3 years ago

I've been thinking about this quite a bit lately; glad you started a discussion. Off the top of my head, I'd say some form of an onion routing protocol best addresses some of the concerns you've enumerated in your initial post. Lokinet seems promising, though I've not tried it yet.

There's also the Federated Web, and the general ethos by which it operates. I think the intersection here is the need for an accessible way to donate computing power; we cannot expect everyone interested in a free and open dialogue on the web to understand this stuff as much as you or I. I truly think this is the crux of the issue. 

You're on the right track in that we have these tools at our disposal; the internet is a massive, decentralized graph that could be interfaced with in such a way that grants laymen the ease-of-access previous efforts have failed to make available.

goldmund
Posts: 5
(@goldmund)
Active Member
Joined: 3 years ago

Ah, I see you already noted Lokinet. Seems I missed that the first go 'round.

Posts: 33
Topic starter
(@whatusername)
Eminent Member
Joined: 3 years ago

Spam and DDoS

Tor onion services are under constant DDoS attacks. DDoS mitigation techniques require a constant change of tactics, which cannot be permanently built into a decentralized system. Once one captcha system is broken, for example, someone needs to design a new one from scratch.

Traditional methods limit the access to the system by requiring some personal information like a phone number.

So it seems we need some crypto technologies here. Either a client needs to contribute some amount of computation (Proof of Work) or pay with coins. notabug.io uses Proof of Work for Reddit-like voting.
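A hashcash-style Proof of Work sketch (the difficulty and nonce encoding are demo choices): the sender burns CPU finding a nonce, while anyone can verify it with a single hash, which is exactly the asymmetry that makes bulk spam expensive:

```python
import hashlib
from itertools import count

def proof_of_work(payload: bytes, difficulty: int = 3) -> int:
    """Find a nonce so the hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(payload + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def is_valid(payload: bytes, nonce: int, difficulty: int = 3) -> bool:
    digest = hashlib.sha256(payload + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = proof_of_work(b"my comment")   # costs the sender ~16**3 hash attempts
print(is_valid(b"my comment", nonce))  # True; the check itself is one hash
```

Raising `difficulty` by one multiplies the sender's expected work by 16 without changing the verification cost, so the network can tune the price of posting.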

As I said, maybe some kind of fund can be created from which simple operations for regular users are paid. Lokinet's Sybil resistance is an interesting case of how market mechanisms can solve technical problems.

Posts: 33
Topic starter
(@whatusername)
Eminent Member
Joined: 3 years ago

So, it looks like we need Tor, I2P or Lokinet for privacy, some cloud software for storage and some crypto for spam/DDoS protection.

Posts: 33
Topic starter
(@whatusername)
Eminent Member
Joined: 3 years ago

Storing data

A file doesn't necessarily have to be stored as a whole on a single server node, nor does it have to be stored on all server nodes. It can be split into chunks and stored on some nodes with some degree of redundancy.

A file can even have some kind of importance level, from which its level of redundancy is calculated. The more important the file, the more reliable its storage.

Regular comments, for example, can be set to a low level of importance, while articles can have a high level.

If we choose to use some cryptocurrency, the cost of storing a file can be adjusted depending on its importance.
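A sketch of how importance could drive both redundancy and price. The 1-5 scale, the replication table and the unit price are all made-up parameters for illustration:

```python
# Hypothetical mapping from importance level to replication factor.
REDUNDANCY = {1: 2, 2: 3, 3: 4, 4: 6, 5: 9}

def storage_cost(size_bytes: int, importance: int,
                 price_per_byte_copy: int = 1) -> int:
    """Cost scales with file size and the replication its importance demands."""
    return size_bytes * REDUNDANCY[importance] * price_per_byte_copy

print(storage_cost(1_000, importance=1))  # 2000 credits - a throwaway comment
print(storage_cost(1_000, importance=5))  # 9000 credits - an important article
```

The owner picks the importance, the network derives the replication factor from it, and the price follows automatically, so "more reliable" and "more expensive" stay coupled.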

 

Removing data

A client can send a request to the network to remove some file he owns. The server nodes should stop distribution of the file and remove all its chunks.

Of course, a server node can run modified software and ignore the request. This is where some verification mechanisms can be implemented. The "good" nodes that comply with the request can start sending requests for the file to other nodes to check whether they removed it too. If not, the "bad" nodes can be excluded from the network. This is just an outline; more complex mechanisms can be implemented.

These mechanisms don't apply to the clients. At the moment of removal some clients may already have received the data, and it's up to the client application to decide what to do with the removal request.

The point is to stop further distribution of the removed file and to cease the network's confirmation of its ownership.

 

Posts: 33
Topic starter
(@whatusername)
Eminent Member
Joined: 3 years ago

Data structures

A file is a sequence of bytes with some unique name (path, ID, address, URL) by which it can be efficiently retrieved from the system.

An article, tweet, image or video can be stored in a file.

But how would more complex structures be implemented, let's say comments to a post?

As a client of the network I have created some identity. I can write a post and send it to the network. The network will store the post with some unique ID, by which it can be retrieved, and with some information confirming my ownership. The owner can send a request to delete the file.

I can easily refer to other files (posts, images, tweets) in my article by including their ID.

Now we can broadcast the ID of the article to other clients. A client writes a comment and sends it to the network by the same mechanism: it just sends a file and receives its unique ID.

Now we need a structure to link comments to the post. First of all, we can derive the order of the comments from their timestamps. Then, if we want to reply to another comment, we can just include its ID in our text and let the client application decide how to represent it.
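A toy content-addressed store showing the whole flow: IDs come from hashing the stored bytes, comments point at the post's ID, replies point at another comment's ID, and ordering falls out of the timestamps. The field names and ID truncation are demo choices:

```python
import hashlib
import json

class ToyNetwork:
    """Content-addressed store: a file's ID is the hash of its bytes."""
    def __init__(self):
        self.files = {}

    def put(self, obj: dict) -> str:
        blob = json.dumps(obj, sort_keys=True).encode()
        file_id = hashlib.sha256(blob).hexdigest()[:16]
        self.files[file_id] = obj
        return file_id

    def get(self, file_id: str) -> dict:
        return self.files[file_id]

net = ToyNetwork()
post_id = net.put({"type": "post", "text": "An Architecture Outline", "ts": 100})
c1 = net.put({"type": "comment", "post": post_id, "text": "First!", "ts": 101})
c2 = net.put({"type": "comment", "post": post_id, "reply_to": c1,
              "text": "Replying to your comment", "ts": 102})

# Order comments by timestamp; replies reference another comment's ID.
comments = sorted((f for f in net.files.values() if f["type"] == "comment"),
                  key=lambda f: f["ts"])
print([c["text"] for c in comments])  # ['First!', 'Replying to your comment']
```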

 

Space

This is where we need the concept of directory or space. Since it's not the low level directory of the file system but some structure in the cloud we may want a new name for it. (By the way, maybe we need a new name for a file too?)

Instead of sending the ID of the article, we send the ID of the space. The space can contain the article, comments and all other data for implementing different applications like forums, Twitter, Reddit, etc.

The space for an article will look like this:

Space ID
|-- Article
|   |-- File ID
|-- Comments
|   |-- File ID
|   |-- File ID
|   |-- ...
|-- Some other structures...

 

Access control
A space has an owner who also defines the access control. We want other identities to be able to read the Article subspace and write to the Comments subspace. Of course, the comment files will retain the ownership of their creators.
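The space-plus-ACL idea as a minimal sketch; the field names and the "everyone"/"owner" rule values are invented for illustration:

```python
space = {
    "owner": "alice",
    "acl": {
        "Article":  {"read": "everyone", "write": "owner"},
        "Comments": {"read": "everyone", "write": "everyone"},
    },
    "Article":  ["file-article-1"],
    "Comments": ["file-comment-1", "file-comment-2"],
}

def can_write(space: dict, subspace: str, identity: str) -> bool:
    """Owner can always write; others only where the ACL says 'everyone'."""
    if identity == space["owner"]:
        return True
    return space["acl"][subspace]["write"] == "everyone"

print(can_write(space, "Comments", "bob"))  # True: anyone may comment
print(can_write(space, "Article", "bob"))   # False: only alice edits it
```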

 

Remember, this all is just my brain dump. I'm definitely going to update it a lot.

Posts: 33
Topic starter
(@whatusername)
Eminent Member
Joined: 3 years ago

I've just noticed it's starting to look like Plan 9.

Posts: 33
Topic starter
(@whatusername)
Eminent Member
Joined: 3 years ago

Trust

As we've mentioned, a server node can run modified software and won't necessarily behave as we expect. It can ignore requests, broadcast false requests, or modify data.

This is where we implement verification mechanisms. The server nodes will check and estimate each other, reveal "bad" nodes and apply sanctions and restrictions to them.

If there are many bad nodes, they can start rating themselves as good and the good nodes as bad. This is where nodes naturally form alliances, or webs of trust. And since there's no central authority, it's up to each client to decide which web of trust to join.
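A minimal sketch of subjective reputation: each client only counts ratings coming from raters already inside its own web of trust, so a clique of bad nodes rating each other highly has no effect on anyone outside the clique. All names and scores here are made up:

```python
def trust_scores(ratings: dict, trusted_raters: set) -> dict:
    """Average each node's ratings, counting only votes from raters
    inside my own web of trust; outsiders' votes are ignored."""
    votes = {}
    for rater, opinions in ratings.items():
        if rater not in trusted_raters:
            continue
        for node, score in opinions.items():
            votes.setdefault(node, []).append(score)
    return {node: sum(s) / len(s) for node, s in votes.items()}

ratings = {
    "A": {"C": 1.0, "D": 0.0},   # A and B are in my web of trust
    "B": {"C": 0.8},
    "D": {"D": 1.0, "A": 0.0},   # D rates itself highly -- ignored
}
scores = trust_scores(ratings, trusted_raters={"A", "B"})
print(scores)  # {'C': 0.9, 'D': 0.0}
```

Because each client chooses its own `trusted_raters` set, two clients can legitimately compute different scores for the same node, which is exactly the "no central authority" property.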

Other mechanisms are possible too. Lokinet, for example, has implemented a market based Sybil resistance mechanism. I believe there are more methods in different crypto projects.

Posts: 33
Topic starter
(@whatusername)
Eminent Member
Joined: 3 years ago

Specialization

Since there are many tasks a server node (service node?) needs to perform, a node can further specialize in doing only one of them. It can route traffic, store data, estimate other nodes, catch bad nodes, etc.

 

Game Theory

Bad nodes can also specialize in ways of deceiving the system. They can choose to be more careful and use advanced methods, so the good nodes have to change tactics to catch them. This is where game theory comes into play. The higher the cost of being caught, the less profitable the cheating.

 

Certainty

Since there's no central authority, each client evaluates its web of trust and calculates the risks of using it. There can never be absolute certainty, but with a reputable web of trust we can achieve an acceptable level of it.

 
