PSE Roadmap: 2025 and Beyond

Thanks! It sounds like a problem worth exploring. If you have any pointers, info, or people to chat with, I'm happy to take them. I'd like to understand the problem and the state of the solution space to see if some more people from PSE can start looking into this.

I think this is somewhat echoing @matt's response in more words, but I’d be interested in an exploration of the intrinsic contradictions between privacy being baked in more deeply and leaving the ledger open. I am pro-privacy. At the same time, I think it would be naive to ignore that the currently open ledger has been a boon for tracking malicious actors. It’s good that we can identify certain addresses as DPRK, or even build up a taxonomy of how they transact onchain.

If privacy protocols get embedded further, are we giving tools to similar parties while still leaving the people we’d want to see protected likely defaulting to less private options? If privacy becomes either the default or only choice for users, are we going to backpedal in terms of how lawmakers look at Ethereum?

Another contradiction lies in a more mundane application. Airdrops have (at least historically) tried to target user behavior representing shared ideologies. For an example with less financially-motivated parameters, there’s Optimism’s first airdrop. To some degree, we want our behavior to be recorded, but only viewable to the right people. As an analogy, if you could build a private alternative to Spotify, but there couldn’t be a personalized recommendation algorithm as a result, could it ever attract the mainstream? Privacy Pools sought to answer this issue, but only for a specific application (mixers). Zcash has something similar, iirc, that allows someone to reveal details of txs they were involved in, but something like that would still require proactive transfer of information from the transacting parties. Would someone like Optimism be able to make a similar airdrop in a private world? If they could (by requesting information or similar), doesn’t that mean that the not right people could also get their hands on the same data trivially?

In other words, vive la confidentialitƩ, but would it mean a regression in some parts of UX and other aspects of adoption?


Technically it’s possible

In the Zcash protocol there is a z_sendmany RPC call that allows sending a one-to-many shielded transaction with many zaddr recipients (up to 54 zaddrs before the Sapling upgrade, and more after, extrapolating from the max tx size increase from 100k to 2000k bytes per tx).
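As a rough sanity check on that extrapolation (this assumes recipient count scales linearly with the max transaction size, which is only approximately true since per-output sizes changed with Sapling):

```python
# Back-of-the-envelope extrapolation of z_sendmany shielded-recipient capacity.
# Assumption: recipient count scales roughly linearly with max tx size.
pre_sapling_recipients = 54        # max zaddr outputs before Sapling (per the post)
pre_sapling_tx_limit = 100_000     # bytes
post_upgrade_tx_limit = 2_000_000  # bytes

scale = post_upgrade_tx_limit / pre_sapling_tx_limit   # 20x
estimated_recipients = int(pre_sapling_recipients * scale)
print(estimated_recipients)  # → 1080
```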

A similar construction can be used to set up a private one-to-many airdrop on L1 or L2s. For example, there’s a privacy pool on Optimism (disclosure: I used to be a contributor to it) that lets one distribute tokens to up to 127 private recipients in one transaction with a single zk proof (you can check out the flow here: Multitransfers | zkBob). The mechanics for recipients work like a pull instead of the push of regular transactions: recipients claim tokens themselves by submitting a transfer or withdrawal from the pool via a transaction with a ZK proof. If you are curious how it works, you can play around with the USDC or ETH pool on Optimism to get a feel for it.
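The pull-based mechanics described above can be sketched abstractly. This is a toy model, not zkBob's actual contract or proof system; the "zk proof" is mocked as a hash-preimage check purely to illustrate the control flow (one funding transaction, many independent claims, nullifiers against double-claiming):

```python
# Simplified pull-model airdrop: the sender funds a pool once; each recipient
# later claims by presenting a proof. hash() stands in for a real commitment
# scheme and zk verifier.

class PrivatePool:
    def __init__(self):
        self.allocations = {}    # commitment -> amount
        self.nullifiers = set()  # spent commitments (prevents double-claim)

    def multitransfer(self, allocations):
        """Sender's single transaction: allocate to many commitments at once."""
        self.allocations.update(allocations)

    def claim(self, commitment, proof):
        """Recipient's pull: prove knowledge of the commitment preimage."""
        if commitment in self.nullifiers:
            raise ValueError("already claimed")
        if hash(proof) != commitment:   # stand-in for zk proof verification
            raise ValueError("invalid proof")
        self.nullifiers.add(commitment)
        return self.allocations[commitment]

pool = PrivatePool()
secret = "recipient-secret"
pool.multitransfer({hash(secret): 100})   # one tx, many recipients possible
print(pool.claim(hash(secret), secret))   # → 100
```

The key property illustrated is that the sender never needs the recipient to be online at distribution time; the recipient claims at their leisure, and the nullifier set is what prevents replay.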

Recipients can reveal the amount they received for themselves; the sender of a one-to-many transfer can reveal it for everyone, but only one hop deep. This technology has been in production on Optimism for three years. Nowadays there are faster and more efficient proving systems (especially on L2s).

Hopefully more developers will create apps that include privacy as a feature even if the main focus of the app is something else (e.g. an airdrop tool?). TL;DR: I am optimistic about it.

If privacy becomes either the default or only choice for users, are we going to backpedal in terms of how lawmakers look at Ethereum?

From other L1s’ experience we see that shielded transfers of native assets can lead to more complicated user flows, e.g. with CEXs and basically any custodial tools where AML is enforced and a withdrawal from a pool gets a 10/10 AML score (hypothetically). For example, Monero only has shielded transactions, while Zcash supports four types of transactions (shielding, unshielding, shielded, transparent). From a hypothetical regulator’s perspective, the easiest approach to analyze and regulate seems to be avoiding unshielding transfers. Survival mode for a chain is to allow shielded transactions; hardcore mode is to support only shielded transactions. So far Ethereum is playing in the safe space, and there are no concerns.

I think you may have misunderstood my intent about airdrops. (Or I misunderstood yours, definitely a strong contender.) I wasn’t expressing doubt about a potential airdropper having a means of distributing privately. I was wondering how they’d even know who to airdrop to.

OP Airdrop #1’s recipients were measured, among other things, by whether they were voting in DAOs, were members of multisigs, and/or were Gitcoin donors. If I’m not mistaken, at least the first and last would be inaccessible information, and potentially the middle too. Even if users could selectively reveal information, how would they know who to reveal to? What stops {insert bad actor here} from claiming they’re making an airdrop list and need data in order to know who to include?

More generally, I assume analyzing on-chain behavior will only become more relevant in creating better UX for users: being able to recommend transactions, warn against others, etc. In a private world, does that mean that the wallet gets full visibility into the user’s txs? If so, who protects the user from whoever controls (or hacks) the wallet?

I see your second point about AML on the chain level, but what about the individual level? Does that mean that if someone wants to be able to use both CEXs and privacy tools, they’ll need to maintain separate addresses for each? How would they fund their private addresses? I guess they could send to a fresh address once from a CEX and then mix out to a ā€œprivateā€ address, never using that particular address again (since it’s linked to them via the CEX, but probably burnt wrt AML because it used a mixer). And I guess privacy that necessitates jumping through a lot of hoops is better than none, but how many people will actually be able to harness it?

In current designs, recipients of an airdrop must take an additional step to provide their zaddr (for Zcash) or zkAddress (for zkBob) before funds can be distributed. One could make a new circuit (and in the EVM’s app layer this is much easier than on L1s) that allows sending to a cleartext address instead of a zkAddress, but with a different way of nullifying the distributed amount. The proof of ownership is generated on the client side and embedded in the memo along with the corresponding zkAddress. This enables funds to be directly allocated to the shielded address. Are we on the same page now? :slight_smile:
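A minimal sketch of that allocation flow, under the stated idea. All names here are hypothetical, and the client-side ownership proof is mocked with an HMAC; a real design would use a zk proof so the link between the eligible address and the zkAddress isn't revealed to observers:

```python
# Hypothetical sketch: a recipient proves ownership of a cleartext (eligible)
# address client-side, embeds that proof plus their shielded zkAddress in a
# memo, and the airdropper verifies it before allocating funds to the
# zkAddress. HMAC stands in for the zk proof of ownership.

import hashlib
import hmac

def make_memo(eligible_addr: str, owner_key: bytes, zk_address: str) -> dict:
    # Client side: bind the zkAddress to the eligible address's key.
    tag = hmac.new(owner_key, zk_address.encode(), hashlib.sha256).hexdigest()
    return {"eligible": eligible_addr, "zk_address": zk_address, "proof": tag}

def allocate(memo: dict, key_registry: dict) -> str:
    # Airdropper side: verify the memo's proof, then credit the zkAddress.
    key = key_registry[memo["eligible"]]
    expected = hmac.new(key, memo["zk_address"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, memo["proof"]):
        raise ValueError("ownership proof failed")
    return memo["zk_address"]  # funds would be allocated to this shielded address
```

Note the obvious limitation of the mock: an HMAC verifier needs the key, whereas the whole point of the real construction is that verification works against public data only.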

If privacy protocols get embedded further, are we giving tools to similar parties while still leaving the people we’d want to see protected likely defaulting to less private options?

I don’t believe protection and privacy are opposites; in fact, I’d argue they go hand in hand.

If privacy becomes either the default or only choice for users, are we going to backpedal in terms of how lawmakers look at Ethereum?

Navigating that while still pushing ahead is the strategy we need to follow :slight_smile:

I assume analyzing on-chain behavior will only become more relevant in creating better UX for users.

It’s definitely a tradeoff and a spectrum of possibilities. But I’d just say: look at how web2 was incentivized to bring better UX through data collection, and how that data was then turned against users.

In any case, these are good considerations, but they fall outside the scope of PSE roadmap feedback :slight_smile:

Hey,
we’ve been working on a paper for a project that seems to be perfectly aligned with PSE’s goals:
Incendia, an on-chain zk voting system based on proof-of-burn. Instead of relying on heavy MPC/FHE or trusted coordinators, voters burn tokens to pseudorandom addresses and submit zk-SNARK proofs of valid burns. This gives:
– on-chain tallying with no external parties
– unlinkability + plausible deniability
– ~27k gas per tally even at scale
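The burn-to-pseudorandom-address idea can be illustrated with a toy sketch. This is not Incendia's actual construction: sha256 stands in for whatever circuit-friendly hash the real system uses, and the zk proof of a valid burn is omitted entirely:

```python
# Toy illustration of proof-of-burn voting: each voter derives a
# pseudorandom, key-less "burn" address from their secret and vote choice,
# sends tokens there, and would later prove the burn in zero knowledge.

import hashlib

def burn_address(voter_secret: bytes, vote: str) -> str:
    # No one knows a private key for this digest, so tokens sent here
    # are irrecoverably burned; the address also encodes the vote.
    digest = hashlib.sha256(voter_secret + vote.encode()).hexdigest()
    return "0x" + digest[:40]   # 20-byte address, Ethereum-style

# Two voters casting identical votes still get unlinkable burn addresses,
# because each address depends on the voter's own secret.
a = burn_address(b"alice-secret", "yes")
b = burn_address(b"bob-secret", "yes")
print(a != b)  # → True
```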

The paper is available on ePrint under the title:
Burn Your Vote: Decentralized and Publicly Verifiable Anonymous Voting at Scale
We also have a simple implementation, and we tried getting some feedback on the ethresear.ch website; people seemed interested in the idea.
We’d love to have a conversation or a collaboration with the PSE team.


Hello, I know about Incendia! I remember seeing the idea in a post some time ago (ethresear.ch?), and I know the MACI/Private Governance team looked at it. It’d be great to chat: @andyguzmaneth.11 on Signal or @andyguzmaneth on TG


It’s always exciting to see clarity and long term thinking when it comes to something as important & fundamental as privacy.

Given that a new roadmap’s been published, now is probably a good time to suggest an expansion of scope. Like everyone else, I’m very excited about FHE, WE, ZK and all the magic of cryptography. I do think we tend to leave out an important and promising area: secure hardware.

I realise that common TEEs like Intel SGX have a bit of a checkered security record to say the least, but evaluating a field of solutions based on the merits of one implementation that is available now is crazy. That’s definitely not what we do with other primitives - most of the time we’re banking on algorithmic speedups and hardware acceleration to shave off many orders of magnitude of overhead. There are plenty of reasons to believe that TEEs can be improved significantly. Probably the most significant is that TEEs like SGX simply weren’t built with the security standards of the crypto world in mind. There are a lot of well-understood security mechanisms that simply aren’t included in today’s products because they are built for a web2 user base for whom 10% overhead is intolerable. The research has advanced and the open hardware ecosystem is growing.

Of course, TEEs aren’t a panacea, and distributed cryptography + TEEs is a much more robust way of storing and computing over shared secret data. However, given just how much usage TEEs are seeing today and the ways in which they can complement software crypto (e.g. collusion protection, or integrity for FHE), it would make sense to consider secure hardware among the privacy primitives to advance.

I’ve been looking into improving TEEs for over a year and I’m happy to answer any questions :slight_smile:

I make a fuller argument in a post titled ā€œTEEs Are The Worst They’ll Ever Beā€ on the Flashbots Forum (sorry, I don’t have perms to post links yet).


here is the link btw


Thanks for the response. I’m not sure if we’re on the same page; I might not be understanding you properly. Let’s use a concrete example: say I want to airdrop to people who contributed over a certain value threshold to a specific Gitcoin funding round (an idea once popular). In a world with fully shielded transactions, the airdropper would ofc be able to airdrop to any publicly contributing addresses, but how would they airdrop to the people who preferred to shield their transactions? iiuc, you’re proposing that they can provide a (locally-generated) message proving that they meet the requirements, along with an address to airdrop funds to. If they have good etiquette, I’m assuming the eligible addr is A, the message will be sent from fresh addr B, and the funds will be airdropped to B or another addr not connected with A, effectively shielding A from revealing that they contributed. (The addr part is simplified bc I see that you’re referring to two different kinds of addrs, and while I’m aware of a similar concept from protocols like Umbra and I think Railgun, I’m not sure that’s what you’re referring to and don’t have a great idea of how they work.)

If this is correct, it does assume proper implementation across both the wallet and the airdropper, imho. The wallet would need to know to send the message from a fresh addr, and the airdropper would need to be configured to properly process the correct message. One thing that concerns me is a malicious airdropper, one that asks for a much more compromising proof. How easy will wallets make it to determine how much information is being revealed compared to how much could and should theoretically be concealed? In an even worse scenario, I wonder if a setup like this leads to middlemen constantly offering small incentives for on-chain data, something akin to people willingly handing over their tx data because they were offered one thing or another, rendering the privacy meaningless as more and more of the forest becomes illuminated, at least to the centralized entities holding the data.

I’m not saying that means we shouldn’t try. Some privacy is better than no privacy. I’m just advocating for exploring these issues and trying to build with them in mind. Web2 has seen the end user gladly sacrifice their online privacy out of a lack of awareness and a lack of concrete immediate consequences, and I’d hate to see the same antipattern emerge in web3.

Thanks for the response!

I don’t believe protection and privacy are opposite, in fact i’d argue they go hand in hand.

This was in response to a specific question about how we make sure that ā€˜legitimate’ end users are protected, while also wondering how much we want to build tools that hide malicious users, and, most of all, how we make sure we don’t end up in a place where regular users compromise their data without realizing it while more proficient malicious actors hide from authorities. Looking at these issues, how can privacy and protection be made to go hand in hand?

But i’d just say look at how web2 incentives to bring better UX with data collections and how it’s then turned against users.

That is precisely my point. How do we stop it from happening again?