
A real answer to "What does decentralization require?"

29 Posts
4 Users
24 Likes
2,278 Views
DoctorAjayKumar
Posts: 10
Topic starter
(@doctorajaykumar)
Active Member
Joined: 4 years ago

Introduction: the situation we're in

Hi, everyone

I posted an older version of this in a comment somewhere, but I'm reposting it as a top-level thread for visibility.

Larry posted on his blog an article called What Decentralization Requires, and asked people to comment with agree or disagree.

At the time, I couldn't produce a good linguistic approximation of my thoughts on the matter.

By analogy, I came up with the following: that article is proposing a set of traffic laws for flying cars. It's not possible for me to meaningfully agree or disagree with it, because I don't know what constraints flying car technology imposes, because flying car technology doesn't exist yet.

Postulating about high-level rules for what a decentralized internet might look like is not a useful exercise until we know what constraints the low-level technology imposes. That insight cannot come until the technology exists and is widely used.

The basic technology required to create a truly decentralized internet does not currently exist.

In particular, creating decentralized replacements for social networks and video hosting sites does NOT amount to simply writing browser extensions, WordPress plugins, or web apps.

The ideas that I'm seeing proposed on this forum are akin to suggesting that we build flying cars by attaching eagles to horse-drawn carriages.

As an analogy: before you write a web app, you need HTTP to exist. Before HTTP, you need TCP. The decentralized web is at the stage where it doesn't have TCP yet.

Let me now pivot to an anecdote, and I will circle back and explain its relevance.

Anecdote: Parler

Parler has been canceled. It's all over the news, but if you aren't familiar with the story, here are two articles:

  1. https://archive.vn/33i7a
  2. https://archive.vn/4jaBd

Additionally, there are unconfirmed rumors that

  1. Parler suffered a database breach that has led to leakage of private user data such as cell phone numbers and scans of photo IDs

  2. Parler failed to scrub metadata from files uploaded to the website (such as geotagging in photos), resulting in leakage of identifying information such as GPS coordinates where photos were taken.

I haven't seen any concrete evidence for either claim, just reports.

According to those articles, Parler is built on WordPress. WordPress is notoriously insecure. Supposing that it is true that Parler is built on WordPress, leakage of user data shouldn't surprise anyone. For instance, the Panama Papers leak was in part made possible by poorly-written WordPress plugins. Moreover, Parler chose to build their house on enemy turf (i.e. AWS), with predictable results.

Bottom line: Parler is in this situation because they made every bad design decision one could possibly make.

Circling back

Parler teaches us an important point: we cannot skip over getting the low-level technical details right. Duct taping existing garbage technology together is not an adequate vision for a decentralized internet.

There is no off-the-shelf system with all of the necessary properties. There is no getting around the fact that getting the infrastructure right and properly implemented is a necessary precondition to application development. Look no further than the Parler debacle.

Like I said, the ideas that I'm seeing proposed on this forum are akin to suggesting that we build flying cars by attaching eagles to horse-drawn carriages.

A solution

For the last few months, Craig Everett (@zxq9) and I have been discussing building a technological solution to the cancelation problem. The time has clearly come for us to do that.

Our project is called The Orange Pill. The first component is the low-level data management software, which is called the Orange Pill Storage System, or OPSS.

We have a GitLab repository right now, which also includes a draft of a manifesto.

That manifesto will eventually answer obvious questions such as

  • "Why are you creating a new thing?"
  • "Why don't you just use $ExistingThing?"
  • "Why does OPSS solve $Problem and $ExistingThing doesn't?"
  • "What exactly does OPSS do, and what does it not do?"
  • "How does OPSS work?"
  • "How can I help?"
  • "How can I donate?"

Craig is an experienced distributed systems engineer, and I am a mathematician.

Craig is working right now on a proof of concept, as well as what amounts to a whitepaper. I am working on getting a funding infrastructure set up, so that Craig and I can afford to work on this. We anticipate 3-12 months of full time work is required to get to version 1.

If someone has some experience setting up funding infrastructure for open-source projects, I could really use some pointers. I am going off of what I find on Google, and God only knows if any of it is correct.

You can contact me on Twitter (@DoctorAjayKumar), or via email (same username at protonmail)

Craig is @zxq9 here and most places, @zxq9_notits on Twitter. His website is http://zxq9.com/ . His email is same username at zxq9.com

28 Replies
zxq9
Posts: 23
(@zxq9)
Eminent Member
Joined: 4 years ago

I can answer any questions of a technical nature anyone might have regarding OPSS. The most important thing to remember about it is that OPSS provides the low-level storage and address -> resource resolution layer, and defines categories of data.

This is infrastructure

To make the system understandable to web developers, it provides a set of RESTful verbs for resource retrieval, query, and data submission. Those verbs combine with data categories to determine how the system should handle each request (private data, for example, cannot be distributed in the same way as public data, and dynamic data like a chat session must be handled very differently as well). It also provides an abstraction of socket connections between nodes based on datagrams of arbitrary size (a rough analog to websockets).

These features are the minimal complexity required to create a platform upon which one can write applications, as opposed to merely storing and broadcasting data in a distributed way. No existing system provides a hybrid approach like this. Further, the current "NoCode" nonsense that cloud providers have fostered in the web world keeps people from understanding what is involved in creating a solution, because the majority have never dealt with the underlying network or storage concepts in the manner necessary to create a new infrastructure layer.
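A toy sketch of how the verb-plus-category idea might fit together. All names here are hypothetical illustrations of the concept, not the actual OPSS API:

```python
# Toy sketch (not the real OPSS API): how a request's RESTful verb and the
# target's data category might combine to select a handling policy.
from enum import Enum

class Category(Enum):
    PUBLIC = "public"      # freely cacheable and distributable
    PRIVATE = "private"    # shared only with grantees, never in the clear
    DYNAMIC = "dynamic"    # live state (e.g. a chat session), single-copy

# Hypothetical policy table: (verb, category) -> how a node should act.
POLICY = {
    ("GET", Category.PUBLIC):  "serve from any cache, amplify if popular",
    ("GET", Category.PRIVATE): "require access grant, serve encrypted copy",
    ("GET", Category.DYNAMIC): "proxy to the single authoritative node",
    ("PUT", Category.PUBLIC):  "write to origin, announce to the swarm",
    ("PUT", Category.PRIVATE): "write to origin only, never replicate in clear",
    ("PUT", Category.DYNAMIC): "apply at the authoritative node, in order",
}

def handle(verb: str, category: Category) -> str:
    """Resolve the handling rule for a request; reject unknown combinations."""
    try:
        return POLICY[(verb, category)]
    except KeyError:
        raise ValueError(f"unsupported request: {verb} on {category.value}")
```

The point of the table shape is that the application developer never chooses a transport; the verb and the data's declared category jointly determine distribution behavior.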

Mindset

If we want a flippant mantra like the meaningless garbage phrases "NoCode" (an utterly stupid idea) or "NoSQL" (a complete red herring -- which query language is being used is immaterial to the fundamentals of the system being talked to) we could create an actually meaningful one: "NoDep". When writing a core infrastructure component that solves a problem nobody has solved before there are no dependencies one can rely on. The infrastructure is the ground-floor layer and trying to build on top of web garbage will result in a garbage outcome. It is the problem of "the tools I need to write my tools don't exist yet".

The Happy Part

The light at the end of the tunnel is that this infrastructure layer does not have to be particularly large or all-encompassing (in fact, it must remain small to be useful and to allow flexible systems to be built on top), and it can be incrementally improved after an "80% solution" v0.X is deployed. It is also easy to build a web-like component on top of it that developers could write in-browser apps for. That is, it is easy to write a webserver that serves localhost and abstracts the network through the OPSS daemon. Web apps could be written that talk to it, and browser plugins could implement an alternate URI scheme in-browser, where talking to the OPSS local webserver is opss://[resource addresses], which can exist right alongside http:// and https://.
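A rough sketch of that localhost-gateway idea. The stub names are mine: a real gateway would translate each request into OPSS network operations, whereas resolve() here just answers locally.

```python
# Minimal sketch: a localhost web server standing in for the HTTP face of
# an OPSS daemon. A real gateway would resolve resources over the network.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

def resolve(path: str) -> bytes:
    """Stub for daemon-side resolution of an opss:// resource address."""
    return f"resource at {path}".encode()

class Gateway(BaseHTTPRequestHandler):
    def do_GET(self):
        body = resolve(self.path)
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve_once() -> bytes:
    """Start the gateway on an ephemeral localhost port and fetch one resource."""
    server = HTTPServer(("127.0.0.1", 0), Gateway)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        with urlopen(f"http://127.0.0.1:{server.server_port}/video/42") as r:
            return r.read()
    finally:
        server.shutdown()
```

A browser plugin rewriting opss:// URIs would simply point them at a server like this one.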

The frustrating thing is watching architecture astronautics and bikeshedding occur when I'm really just eager (and ready) to get directly to work on this. But then again, that's the typical situation in tech: "Worse is Better", so it goes.

Reply
Posts: 33
(@whatusername)
Eminent Member
Joined: 4 years ago

As I have written in another post, it seems we need three major layers: privacy, storage, and spam/DDoS protection. It looks like your project aims to solve the second one.

How do you think we can integrate it with the other layers? Can we build on already existing components like Tor, I2P, Lokinet?

And if we use crypto for spam/DDoS protection, it's going to be slow and expensive. Do we have other options? Maybe there are more complex market mechanisms that could solve these problems. Lokinet, for example, uses market mechanisms to solve some of Tor's problems, and Freenet implements a kind of web of trust to fight spam.

Reply
4 Replies
DoctorAjayKumar
(@doctorajaykumar)
Joined: 4 years ago

Active Member
Posts: 10

@whatusername This question is better answered by @zxq9, but I will give it a shot

OPSS sort of addresses the privacy problem, because it allows for a VPN-like structure. OPSS at the end of the day is more or less just an addressing system plus some software/protocols for moving bits around. It's possible to set up a structure that hides the true address that some data originates from, in much the same way that a VPN works.

OPSS is also antifragile to DDoS attacks, because the way it delivers bits from point A to point B is through a system similar to torrenting. Trying to access data increases the number of caches that hold a copy of that data. So a DDoS-style attack has the opposite of its intended impact: it makes the content more available.
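A toy model of that amplification effect, assuming the simplest possible caching rule (every successful fetch leaves a copy in the requester's cache). This is my illustration of the claim, not OPSS's actual replication logic:

```python
# Toy model: "attack" one piece of data with many requests from random
# nodes. Each fetch leaves a cached copy on the requester, so the set of
# nodes holding the data can only grow under load.
import random

def simulate_flood(num_nodes: int, requests: int, seed: int = 0) -> int:
    """Return how many nodes hold the data after a flood of requests."""
    rng = random.Random(seed)
    holders = {0}  # node 0 is the origin and always keeps its copy
    for _ in range(requests):
        requester = rng.randrange(num_nodes)
        holders.add(requester)  # fetched data now sits in this node's cache
    return len(holders)
```

With, say, 5000 requests spread over 1000 nodes, nearly every node ends up holding a copy; the flood itself built the distribution network.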

Reply
(@whatusername)
Joined: 4 years ago

Eminent Member
Posts: 33

For example, if there's no cost to creating identities, and identities are the only thing moderation facilities can act on, we can just keep making them infinitely and spam the network.

Reply
zxq9
(@zxq9)
Joined: 4 years ago

Eminent Member
Posts: 23

@whatusername What would you be spamming to, though? Your own node unless someone else has granted open access to communicate with their node. This is an application level issue, not an issue that the infrastructure layer deals with (in the same way that access control and concepts like moderation are application level issues for, say, Twitter, but have nothing to do with HTTP or TCP itself).

Reply
zxq9
(@zxq9)
Joined: 4 years ago

Eminent Member
Posts: 23

@whatusername There are indeed layers to the problem. The first is basically data handling: data storage, transmission, distribution, addressing, access control, categorization (into public-distributed, private-distributed, and single-copy). That is what OPSS solves. Without any concept of access control it is not possible to write applications, though it is possible to simply store data (and in the abstract, stored data could be programs, but that's not quite what we're going for here).

The second layer (and further) layers are in the realm of the application developer who wants to base their application on OPSS. Note, OPSS is not a Twitter or YouTube replacement -- it is the basis for building things like Twitter and YouTube (and anything else you might dream up).

Building on top of other systems won't work unless we want another, much slower, data store to augment the one that OPSS would already be providing across its swarm of nodes. That is to say, there is no reason that public data couldn't also be added to a bittorrent daemon or a freenet daemon running alongside OPSS, but both systems are incapable of ordered retrieval (for files intended to be consumed as streams, like podcast recordings) and on-demand amplification (livestreams or suddenly-popular videos), and none of the existing systems has a way to actually control dynamic data other than letting it expire in the cache by virtue of becoming unpopular.

Again, these are keyed storage systems. Very interesting and novel ones, but keyed storage nonetheless, and that isn't enough to write an application on top of. OPSS provides a hybrid approach that combines the merits of these systems, plus a method of ordered retrieval and a canonical origin for data. (The canonical origin comes at the expense of address obfuscation, which is a major selling point of things like Tor and Freenet, but address obfuscation precludes any possibility of creating applications that run with reasonable latency.) It also makes the owner of the node where a given piece of data originates the physical holder of the canonical (reference) copy of that data, with both full rights to it and the sole authority over whether to pull it from its origin location (public data may still exist in the public cache for a while, but would eventually expire unless very popular).
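To make the "ordered retrieval" point concrete, here is a minimal sketch (my own illustration, not OPSS code) of a reorder buffer that turns out-of-order swarm chunks back into a playable stream:

```python
# Swarm downloads arrive out of order, but a stream consumer needs chunks
# in sequence. This buffer releases the longest ready prefix as chunks land.
import heapq

class ReorderBuffer:
    def __init__(self):
        self._heap = []  # (index, chunk) pairs waiting for their turn
        self._next = 0   # index the consumer needs next

    def push(self, index: int, chunk: bytes) -> list:
        """Accept one out-of-order chunk; return any chunks now playable."""
        heapq.heappush(self._heap, (index, chunk))
        ready = []
        while self._heap and self._heap[0][0] == self._next:
            ready.append(heapq.heappop(self._heap)[1])
            self._next += 1
        return ready
```

Pure keyed stores can fetch chunks, but without a sequencing layer like this (and a protocol to prioritize the next-needed chunks), they cannot serve a podcast or livestream.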

Crypto cannot help with DDoS; only distribution can. Any single node of OPSS could be DDoS'd in the traditional manner, but this would only affect key retrieval for private data and access to the live state of single-copy data; it would not affect distributed address resolution or interfere with data already in the public cache. Note that DDoSing a single node would amount to knocking a single user offline. Imagine "John's Twitter feed is down" instead of "Twitter is down". That's a very different thing, and because the node owner still has local access to his node, he can set up another endpoint or copy the data and its control to another node if the issue at hand is really important.

Reply
Posts: 33
(@whatusername)
Eminent Member
Joined: 4 years ago

How will controlling your data, or moderation, be implemented, for example? Is there a method to actually remove some piece of data you control, or do we just send some kind of 'removed' or 'edited' flag while all the data is kept there forever? Would that violate the data ownership principle which Larry Sanger has described?

Reply
1 Reply
DoctorAjayKumar
(@doctorajaykumar)
Joined: 4 years ago

Active Member
Posts: 10

@whatusername

The data ownership principle is met by you owning your own data. If you put your data on someone else's server, then you don't own it anymore. That's the problem with cloud computing and existing social networks.

None of the existing technologies really allow you to share data without surrendering ownership of it. That's why we're writing a new thing.

Say you want to replicate the functionality of YouTube:

- If you make a video public, you don't own it anymore, save for whatever influence copyright laws might have. You can't make data public and then expect to maintain control over it. That's not how anything works. Likewise, you shouldn't give your private data to someone and expect to maintain control over it.

- However, what we can do is give you a way to, say, share a video with your friends but not with the public. What you need is access control (enforced via cryptography), and then a way for your friends to stream the video without you uploading it to some central server.

OPSS solves the low-level data problem there: access control and stream-torrenting. No existing system can solve that problem.
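As an illustration of the access-control pattern (a toy only, NOT real cryptography and not the OPSS implementation): encrypt the video once under a content key, then "wrap" that key separately for each friend, so the swarm can carry the ciphertext while only grantees can unwrap it.

```python
# Toy key-wrapping sketch. The XOR "wrap" is illustrative only; a real
# system would use authenticated encryption and public-key exchange.
import hashlib
import secrets

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(content_key: bytes, friend_secrets: dict) -> dict:
    """Wrap the 32-byte content key once per friend, using a wrapping key
    derived from each friend's shared secret."""
    return {name: _xor(content_key, hashlib.sha256(secret).digest())
            for name, secret in friend_secrets.items()}

def unwrap(wrapped: bytes, my_secret: bytes) -> bytes:
    return _xor(wrapped, hashlib.sha256(my_secret).digest())

# The video ciphertext can then be torrent-streamed by anyone; only
# friends holding a valid secret recover the content key.
key = secrets.token_bytes(32)
grants = share(key, {"alice": b"alice-secret", "bob": b"bob-secret"})
assert unwrap(grants["alice"], b"alice-secret") == key
```

The design point: access control lives in who can unwrap the key, not in which machines are allowed to hold the encrypted bits.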

All of the various social networking concepts have the same low-level data management problem I just described. OPSS is meant to be used as a backbone for people to write those types of applications on top of.

There's some issues with making that video streaming application user-friendly, and packaging that as something your grandma can use. OPSS solves the low-level problem of basically letting people Torrent-stream some video, and having access control on it.

Reply
Posts: 33
(@whatusername)
Eminent Member
Joined: 4 years ago

Let's say I'm an average Joe, I'm drunk and have posted some nasty things about my family and friends on Reddit. Next morning I wake up and I see it's been getting more and more upvotes, I just press the delete button and it's gone.

Yes, there can be screenshots, there can be archive pages, but I've prevented it from gaining more and more popularity, I've stopped the main stream of attention, and I've broken its link to my Reddit identity. Now it's someone else's problem to publish the copy of my post, seek public attention, and prove that I really wrote it and it's not fake.

So theoretically you're correct, but practically people would choose something like Reddit over some permanent database that stores everything they ever wrote on the Internet forever.

Reply
1 Reply
DoctorAjayKumar
(@doctorajaykumar)
Joined: 4 years ago

Active Member
Posts: 10

@whatusername

OPSS doesn't have a permanent database that stores everything forever. That's a blockchain. OPSS isn't a blockchain-based system.

The problem you're describing is something that each application would address individually. So I can think of application designs that allow you to delete posts like that, and some that don't. OPSS doesn't really address that.

It might be possible to implement something like Reddit in a distributed manner, where everyone has their own data on their machine. Perhaps there's some central datastructure which contains a tree of pointers to different posts that people have made (which is what you would need for a comment tree). That doesn't seem like it would work, but maybe.

That's an interesting problem, but that specific problem is an application level problem. At that point it's a tradeoff between latency and privacy.

@zxq9 just said something interesting to me: the point of OPSS is to make these tradeoffs visible and allow an application developer to cleanly choose between them. As it stands, most people are completely unaware that these tradeoffs have to be made.

Maybe a Reddit-like thing is a problem you could solve.

Reply
Posts: 33
(@whatusername)
Eminent Member
Joined: 4 years ago

Okay, maybe I just don't understand the architecture. In my An Architecture Outline post, I'm thinking about the system as a black box where application developers can build things like forums, Reddit, or Twitter without thinking about all the privacy and spam problems, individual nodes, caching, etc.

So the API for creating something like a forum would look like this: generate a storage space; make this identity the owner of the space; allow all identities to read the space; allow a list of identities to write to the space. Something like that.

Can you provide a similar example of how an application developer could create a forum or Twitter based on your system?

Reply
12 Replies
DoctorAjayKumar
(@doctorajaykumar)
Joined: 4 years ago

Active Member
Posts: 10

@whatusername

You should not think of anything as a black box that allows you to forget about fundamental tradeoffs in your application. However, one of the goals of OPSS is to make these tradeoffs plainly visible, and cleanly allow you to choose between them.

Important point: as far as technology is concerned, putting data on someone else's computer means surrendering ownership of the data. We're not implementing DRM.

So Twitter. You want to make that distributed. There are tradeoffs, and a lot of ways to approach it, but here's one approach.

All of your tweets are stored on your computer. If someone wants to read your tweets, they send a request that ultimately ends up at your computer. There's a caching system implemented in order to solve the Slashdot effect (which today we would call "going viral"). There's a centralized system somewhere that stores trees of pointers to comments (or maybe that's stored in some blockchain structure, IDK), and so that's how commenting would work.

That would be taking the extreme privacy end of the privacy/latency tradeoff.
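The comment-tree piece of that sketch might look like the following (a hypothetical structure of my own, not OPSS code): the shared tree stores only pointers (author node plus post id), never post bodies, which stay on their authors' machines.

```python
# Hypothetical shared comment tree: pointers only, no post bodies.
from dataclasses import dataclass, field

@dataclass
class CommentNode:
    author_node: str  # address of the node that holds the post body
    post_id: str      # key of the post on that node
    replies: list = field(default_factory=list)

    def add_reply(self, author_node: str, post_id: str) -> "CommentNode":
        child = CommentNode(author_node, post_id)
        self.replies.append(child)
        return child

    def flatten(self) -> list:
        """Depth-first list of (author_node, post_id) pointers to fetch."""
        out = [(self.author_node, self.post_id)]
        for reply in self.replies:
            out.extend(reply.flatten())
        return out
```

A reader's client walks the tree, then fetches each body from its author's node (or from a cache), so deleting a post is as simple as the author's node refusing to serve it.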

You could also have something similar to the way Twitter works now, where you surrender ownership of your tweets, and maybe you can delete them, maybe you can't. All the tweets are stored in some centralized database, and OPSS is used to handle resource allocation. So when NewTwitter is receiving a lot of traffic, they can rent processing time and storage space from other nodes, somewhat like a VPS. There are trust issues there that need to be solved on a case-by-case basis, but they're no more daunting than the trust issues that currently exist with something like AWS.

There's a lot of ways to skin a cat. We're selling knives.

Reply
(@whatusername)
Joined: 4 years ago

Eminent Member
Posts: 33

@doctorajaykumar Well, as I said, I expect the system I described to be slow and expensive. And I view it more abstractly, where your data is not stored on individual computers but on an abstract network where some integrity and verification mechanisms can be implemented. Whether these mechanisms are possible at all is, of course, another question. For example, if we send a request to delete a piece of data and the majority of the network nodes run unmodified software, we can be sure to a certain degree that it will actually be deleted. Of course, some modified clients can still store it, but as I said, it's someone else's problem to publish it, seek attention, and prove things.

 

Reply
DoctorAjayKumar
(@doctorajaykumar)
Joined: 4 years ago

Active Member
Posts: 10

@whatusername

You're describing some hybrid between a blockchain and a web of trust. That's definitely not what OPSS is. There's no web of trust, and no blockchain.

The bits have to live somewhere. Good quote I heard: "There's no cloud. It's just someone else's computer."

You're almost describing a git repository where you can also edit the entire file history, according to some sort of consensus algorithm. Almost distributed DRM. That isn't what OPSS is.

I'm not interested in developing a system where people can delete data on someone else's computer. More importantly, a distributed system can't rely on trusting everyone else to do the right thing.

If you like your data, you can keep your data. But if you put your data on someone else's computer, it's no longer your data.

Even the access control system we're implementing doesn't allow you to delete things from someone else's system. It just lets you control whom you share data with.

Reply
(@whatusername)
Joined: 4 years ago

Eminent Member
Posts: 33

@doctorajaykumar

I divide the nodes into clients and servers.

The server nodes are supposed to store and distribute the content with some level of integrity and verification. They are expected to delete data on request by the owner with some degree of certainty.

These mechanisms don't apply to the clients, and if some clients have already received the data, it's up to the application to decide what to do when a delete request is received.

The point is to stop further distribution of the data and to end the network's confirmation of ownership.

Reply
zxq9
(@zxq9)
Joined: 4 years ago

Eminent Member
Posts: 23

@whatusername What makes you trust the "server" nodes?

Reply
(@whatusername)
Joined: 4 years ago

Eminent Member
Posts: 33

@zxq9

There can be redundancy, checksums, signatures, verification mechanisms for nodes to check each other, some kind of web of trust for server nodes.

A Tor exit node, for example, can modify plain traffic, so there are verification mechanisms to check the integrity of the traffic from time to time and exclude malicious nodes from the network.

Lokinet uses market mechanisms to protect itself from a Sybil attack.

There can be more complex mechanisms by which nodes check and rate each other and apply some kind of sanctions and restrictions.

In the end we can have multiple webs of trust for server nodes and a client can choose which one to trust.

 

Reply
zxq9
(@zxq9)
Joined: 4 years ago

Eminent Member
Posts: 23

@whatusername Those mechanisms check only the integrity of the data. A hash achieves that: make the address of a resource its hash and it's done. A signature can prove provenance, but once again this is only a check on the data which is only half of the equation. Cryptographic methods can do nothing to ensure behavior.
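Those two solved checks can be sketched concretely (standard library only; a real system would use proper public-key signatures, for which the HMAC here is only a stand-in):

```python
# Content addressing and provenance: the address *is* the hash, so data
# cannot be silently altered; a keyed tag stands in for a real signature.
import hashlib
import hmac

def address_of(data: bytes) -> str:
    """Self-certifying address: fetch by hash, then re-hash to verify."""
    return hashlib.sha256(data).hexdigest()

def verify_fetch(address: str, data: bytes) -> bool:
    """Check that bytes fetched from an untrusted node match their address."""
    return hmac.compare_digest(address_of(data), address)

def sign(data: bytes, author_key: bytes) -> bytes:
    """Toy provenance tag (stand-in for a real signature scheme)."""
    return hmac.new(author_key, data, hashlib.sha256).digest()
```

Both checks constrain only the data itself; neither says anything about whether the node holding the data will delete, withhold, or leak it, which is the behavior problem.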

What ensures that the nodes holding the data are behaving in a trustworthy way? With Tor, for example, you can't be sure whether a malicious node is reporting on you rather than actually re-routing your requests (this type of fingerprinting attack is a significant problem in real Tor compromises).

You listed a wide range of things that nodes must do: provide for privacy, delete data when the owner of it decides to, honor a user's right to export their data, only share data with people who are supposed to be allowed to see it, etc. These are behaviors, not data integrity issues, and there is nothing you can do to guarantee they will do any of those things.

That is why OPSS defines categories of data, specifies different distribution mechanisms for each category, and documents the tradeoffs associated with each for developers. A system that makes promises about the handling of data can never abandon the principle that the originating node (which in the base case is owned by the person or organization that owns the data) is the canonical source of and authority for a given article of data.

Reply
(@whatusername)
Joined: 4 years ago

Eminent Member
Posts: 33

@zxq9

There is also redundancy of behavior.

A file is not stored on a single node; it's split into chunks and distributed among many nodes with some level of redundancy. When the file is assembled again, checksums of the chunks are calculated by multiple nodes, and if some "bad" node provides wrong data, it can be excluded from further operations.
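A minimal sketch of the mechanism described above (all names are illustrative, not from any real system): reassemble a file from redundant chunk copies, use checksums to spot a node serving bad data, and mark it for exclusion.

```python
# Reassemble a file from a chunk manifest; detect nodes serving bad data.
import hashlib

def reassemble(manifest, fetch):
    """manifest: list of (chunk_hash, [holder_node, ...]) in file order.
    fetch(node, chunk_hash) -> bytes from that node.
    Returns (file_bytes, set_of_bad_nodes_detected)."""
    out, bad = [], set()
    for chunk_hash, holders in manifest:
        for node in holders:
            data = fetch(node, chunk_hash)
            if hashlib.sha256(data).hexdigest() == chunk_hash:
                out.append(data)
                break
            bad.add(node)  # served bytes that don't match the checksum
        else:
            raise IOError("no good copy of chunk " + chunk_hash[:8])
    return b"".join(out), bad
```

As zxq9 notes in the next reply, this catches tampered data but says nothing about other behaviors (deleting on request, honoring access rules), which checksums cannot verify.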

The same way "good" nodes that have deleted the file on request can try to retrieve it from other nodes to check their behavior.

If we have many "bad" nodes, they can try to deceive the system and rate themselves as good and others as bad. This is how nodes will naturally form alliances or webs of trust. And since there's no central authority, each client chooses which one to use.

 

Reply
zxq9
(@zxq9)
Joined: 4 years ago

Eminent Member
Posts: 23

@whatusername You are again discussing mere data verification. That is already a solved problem.

I'm talking about validation of behavior because applications are behaviors over data and not just the data themselves.

You cannot write an actual application on top of a purely distributed data layer alone and provide any guarantees about behavior (or even provenance). Think it through. The fundamental constraints of, and tradeoffs available to, distributed systems are well understood. The design of OPSS provides every manner of tradeoff available within a single system, puts them all in the application developer's toolbox so he can choose how his application will handle a given case, and makes obvious what each choice entails.

Reply
(@whatusername)
Joined: 4 years ago

Eminent Member
Posts: 33

@zxq9 What would be a problem with deleting, for example? We cannot guarantee that a node has actually deleted the chunks, but we can check if it still provides them on request. If so, we exclude it from our web of trust.

Reply
zxq9
(@zxq9)
Joined: 4 years ago

Eminent Member
Posts: 23

@whatusername Any node in a web of trust can selectively lie.

Reply
(@whatusername)
Joined: 4 years ago

Eminent Member
Posts: 33

@zxq9

This is where game theory comes into play. We've already increased the level of certainty with a simple mechanism, and bad nodes now need to be more careful, but good nodes can also change tactics to catch them. The higher the cost of being caught, the less profitable the cheating.

So, yes, there's always only some degree of certainty and each client decides how much risk he can take.

Reply
atmchuck
Posts: 8
(@atmchuck)
Active Member
Joined: 4 years ago

@zxq9 and @doctorajaykumar, I've read through most of the posts in this thread, and, TBH, I'm still re-reading some to be sure I understand things. That said, I cheated a little and searched for IPFS to see if it had been mentioned yet, and I see no references. I'm wondering how OPSS compares to IPFS (or is it even apples to apples?). I'll look at the GitLab link you posted next. I'm genuinely interested in this topic in general, and specifically in what you are suggesting. Thanks for your contributions thus far.

Reply
2 Replies
zxq9
(@zxq9)
Joined: 4 years ago

Eminent Member
Posts: 23

@atmchuck Hello!

Comparing IPFS and OPSS is sort of an apples-to-mammals exercise. IPFS is a distributed file system, and for what it is, it seems to have a nice design. But a distributed file system is not sufficient as a basis for writing a distributed application: it lacks access control, lacks an execution environment, lacks security guarantees, cannot be used for streaming content, etc.

Saying IPFS is a platform for development of distributed applications is like saying a hard drive is a platform for development of local applications. A whole constellation of services, application-level constructs (like the concept of a "user"), and interfaces to those services must exist before an end-user application can be developed without requiring every application to re-invent the world every time an author has an idea.

Unfortunately this is the situation with every proposal I've seen so far, and it is pretty clear that there are just not very many people who have worked on infrastructure projects and understand networking well enough to grok what is required to develop a completely new infrastructure concept, much less implement one.

Last week I started working on some very early code for an interconnector component for OPSS called COON, and over the next few days, as time permits, I will be making OPSS and COON handshake and create interconnection networks of peers. Unfortunately it looks like this is all just going to be unfunded work, so the going will be extremely slow (and will probably face periods of complete stall, because I have kids and feeding them takes priority), but whatever. It is better to incrementally write a system that has the necessary characteristics than to wax philosophic about systems that are in beta now but lack the features needed to actually do anything novel, or to engage in architecture astronautics about things nobody is ever going to write.

Reply
zxq9
(@zxq9)
Joined: 4 years ago

Eminent Member
Posts: 23

An addendum:

Kumar and I spent a bit of time writing a (generally ignored) post where we responded point-by-point to Larry Sanger's original essay about what decentralization requires. It touches on quite a few areas you might be interested in, including why systems like IPFS and Freenet do not provide features sufficient to act as an application development platform:

Link to our response: The Orange Pill Storage System: A point-by-point and exhaustively detailed answer to "What Decentralization Requires"

Reply
atmchuck
Posts: 8
(@atmchuck)
Active Member
Joined: 4 years ago

I'm still working on understanding (at least at a high level) everything you are saying in this post. Does what you are planning address the problem that something like this addresses?

https://github.com/samyk/pwnat

Reply
1 Reply
zxq9
(@zxq9)
Joined: 4 years ago

Eminent Member
Posts: 23

@atmchuck Hole punching (or more generally, NAT traversal) is indeed one of about a dozen problems that have to be solved. Most techniques (as the documentation in that project explains) rely on a third party, and for the naive hole-punching routine OPSS employs, COON (the network monitor and interconnect component) plays the role of that third party. Naive UDP hole punching is far faster and more reliable than traversal techniques that depend on NAT behaving in a predictable way, which is why it is what we are going with for the v0.1 version of OPSS. The plan is to add several more NAT traversal techniques in the future, to employ as the situation dictates.
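Schematically, the naive rendezvous flow works like this. This is my illustration of the general UDP hole-punching technique, not COON's actual protocol, and it models only the message order, not the sockets:

```python
# Third-party rendezvous for naive UDP hole punching: the rendezvous node
# records each peer's publicly observed endpoint, then introduces pairs so
# both peers send simultaneously and open their NAT mappings.

class Rendezvous:
    def __init__(self):
        self.seen = {}

    def register(self, peer_id, observed_endpoint):
        # In reality the observed (ip, port) is taken from the source
        # address of the peer's UDP packet after NAT translation.
        self.seen[peer_id] = observed_endpoint

    def introduce(self, a, b):
        """Tell each peer the other's endpoint; once both start sending,
        each NAT sees outbound traffic and admits the inbound replies."""
        return {a: self.seen[b], b: self.seen[a]}

r = Rendezvous()
r.register("alice", ("203.0.113.5", 40001))  # NAT-mapped endpoints
r.register("bob", ("198.51.100.7", 51234))
plan = r.introduce("alice", "bob")
```

This only works while the NAT keeps the mapping alive and stable, which is why fallback traversal techniques are needed for stricter NAT types.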

Reply