

Archives for July 2011

Retiring FedThread

Nearly two years ago, the Federal Register was published in a structured XML format for the first time. This was a big deal in the open government world: the Federal Register, often called the daily newspaper of our federal government, is one of our government’s most widely read publications. And while it could previously be read in paper and PDF forms, it wasn’t easy to digitally manipulate. The XML release changed all this.

When we heard this was happening, four of us here at CITP—Ari Feldman, Bill Zeller, Joe Calandrino, and myself—decided to see how we might improve the way citizens interact with the Federal Register. Our big idea was to make it easy for anyone to comment paragraph-by-paragraph on any of its documents, like a proposed regulation. The site, which we called FedThread, would provide an informal public forum for annotating these documents, and we hoped it would lead to useful online discussions about the merits and weaknesses of all kinds of federal regulatory activity. We also added other useful features, like a full-text search engine and custom RSS feeds. Building these features for the Federal Register became a straightforward task only because of the new XML version. We built the site in just eight days from conception to release.

Another trio of developers in SF also saw opportunities in this free machine-readable resource and developed their own project called GovPulse, which had already won the Sunlight Foundation’s Apps for America 2 contest. They were then approached by the staff of the Federal Register last summer to expand their site to create what would become the new online face of the publication, Federal Register 2.0. Their approach to user comments actually guided users into participating in the formal regulatory comment process—a great idea. Federal Register 2.0 included several features present in FedThread, and many more. Everything was done using open source tools, and made available to the public as open source.

This has left little reason for us to continue operating FedThread. It has continued to reliably provide the features we developed two years ago, but our regular users will find it straightforward to transition to the similar (and often superior) search and subscription features on Federal Register 2.0. So, we’re retiring FedThread. However, the code that we developed will continue to be available, and we hope that enterprising developers will find components to re-use in their own projects that benefit society. For instance, the general purpose paragraph-commenting code that we developed can be useful in a variety of projects. Of course, that code itself was an adaptation of the code supporting another open source project—the Django Book, a free set of documentation about the web framework that we were using to build FedThread (but this is what developers would call a “meta” observation).

Ideally, this is how hacking open government should work. Free machine-readable data sets beget useful new ways for citizens to explore those data and make them useful to other citizens. Along the way, developers experiment with different ideas, some of which catch on and others of which serve as fodder for the next great idea. This happens faster than standard government contracting, and often produces more innovative results.

Finally, a big thanks to the GPO, NARA and the White House Open Government Initiative for making FedThread possible and for helping to demonstrate that this approach can work, and congratulations on the fantastic Federal Register 2.0.

Telex and Ethan Zuckerman's "Cute Cat Theory" of Internet Censorship

A few years ago, Ethan Zuckerman gave a talk at CITP on his “cute cat theory” of internet censorship (see also NY Times article), which goes something like this:

Most internet users use the internet and social media tools for harmless activities, like looking at pictures of kittens online. However, an open social media site is open to political content as well as pictures of kittens. Repressive governments might attempt to block this political content by blocking access to, say, all of Blogspot or all of Twitter, but in doing so they also block people from looking at non-political content, like pictures of cute kittens. This both brings more attention to the political causes the government is trying to suppress through the Streisand effect, and can politicize users who previously just wanted unfettered access to cute kittens.

This is great for Web 2.0, and suggests that activists should host their blogs on sites where a lot of kittens would be taken down as collateral damage should they be blocked.

However, what happens when a government is perfectly willing to block all social media? What if a user wants to do more than produce political content on the web?

Telex (blog post) can be seen as a technological method of implementing the cute cat theory for the entire internet: the system allows a user to circumvent internet censorship by executing a secret knock on potentially any web site outside of the censor’s network. When any web site, no matter how innocuous or critical to business or political infrastructure, can be used for a political goal in this fashion, the censorship/anti-censorship cat-and-mouse game is elevated beyond single proxies and lists of blockable Tor nodes, and beyond kittens, to the entire internet.

Anticensorship in the Internet's Infrastructure

I’m pleased to announce a research result that Eric Wustrow, Scott Wolchok, Ian Goldberg, and I have been working on for the past 18 months: Telex, a new approach to circumventing state-level Internet censorship. Telex is markedly different from past anticensorship efforts, and we believe it has the potential to shift the balance of power in the censorship arms race.

What makes Telex different from previous approaches:

  • Telex operates in the network infrastructure — at any ISP between the censor’s network and non-blocked portions of the Internet — rather than at network end points. This approach, which we call “end-to-middle” proxying, can make the system robust against countermeasures (such as blocking) by the censor.
  • Telex focuses on avoiding detection by the censor. That is, it allows a user to circumvent a censor without alerting the censor to the act of circumvention. It complements anonymizing services like Tor (which focus on hiding with whom the user is communicating, not on hiding the fact that the user is attempting to have an anonymous conversation) rather than replacing them.
  • Telex employs a form of deep-packet inspection — a technology sometimes used to censor communication — and repurposes it to circumvent censorship.
  • Other systems require distributing secrets, such as encryption keys or IP addresses, to individual users. If the censor discovers these secrets, it can block the system. With Telex, there are no secrets that need to be communicated to users in advance, only the publicly available client software.
  • Telex can provide a state-level response to state-level censorship. We envision that friendly countries would create incentives for ISPs to deploy Telex.

For more information, keep reading, or visit the Telex website.

The Problem

Government Internet censors generally use firewalls in their network to block traffic bound for certain destinations, or containing particular content. For Telex, we assume that the censor government desires generally to allow Internet access (for economic or political reasons) while still preventing access to specifically blacklisted content and sites. That means Telex doesn’t help in cases where a government pulls the plug on the Internet entirely. We further assume that the censor allows access to at least some secure HTTPS websites. This is a safe assumption, since blocking all HTTPS traffic would cut off practically every site that uses password logins.


Many anticensorship systems work by making an encrypted connection (called a “tunnel”) from the user’s computer to a trusted proxy server located outside the censor’s network. This server relays requests to censored websites and returns the responses to the user over the encrypted tunnel. This approach leads to a cat-and-mouse game, where the censor attempts to discover and block the proxy servers. Users need to learn the address and login information for a proxy server somehow, and it’s very difficult to broadcast this information to a large number of users without the censor also learning it.
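
Here is a minimal, hypothetical sketch in Python of that kind of single fixed relay: the user's client connects to a trusted server outside the censored network, names the blocked destination it wants, and the server shuttles bytes back and forth. The host names and port number are made up, and the encryption layer (e.g., wrapping the client connection in TLS or SSH) is omitted for brevity. The point is simply that everything hinges on one well-known server address that the censor can eventually discover and block.

    # Sketch of a conventional one-hop relay proxy (not a hardened anticensorship tool).
    # The server would run at a hypothetical address like proxy.example.net; once the
    # censor learns that address, it can simply block it.
    import socket
    import threading

    LISTEN_PORT = 8888      # where users inside the censored network connect
    BUFFER_SIZE = 4096

    def relay(src, dst):
        """Copy bytes one way until either side closes."""
        try:
            while True:
                data = src.recv(BUFFER_SIZE)
                if not data:
                    break
                dst.sendall(data)
        except OSError:
            pass
        finally:
            src.close()
            dst.close()

    def handle_client(client_sock):
        # The first line from the client names the blocked destination,
        # e.g. "blocked.example.org 443" (a made-up host for illustration).
        request = client_sock.recv(BUFFER_SIZE).decode().strip()
        host, port = request.split()
        upstream = socket.create_connection((host, int(port)))
        # Shuttle traffic in both directions between the user and the blocked site.
        threading.Thread(target=relay, args=(client_sock, upstream), daemon=True).start()
        threading.Thread(target=relay, args=(upstream, client_sock), daemon=True).start()

    def main():
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("0.0.0.0", LISTEN_PORT))
        server.listen()
        while True:
            client, _ = server.accept()
            threading.Thread(target=handle_client, args=(client,), daemon=True).start()

    if __name__ == "__main__":
        main()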

How Telex Works

Telex turns this approach on its head to create what is essentially a proxy server without an IP address. In fact, users don’t need to know any secrets to connect. The user installs a Telex client app (perhaps by downloading it from an intermittently available website or by making a copy from a friend). When the user wants to visit a blacklisted site, the client establishes an encrypted HTTPS connection to a non-blacklisted web server outside the censor’s network, which could be a normal site that the user regularly visits. Since the connection looks normal, the censor allows it, but this connection is only a decoy.

The client secretly marks the connection as a Telex request by inserting a cryptographic tag into the headers. We construct this tag using a mechanism called public-key steganography. This means anyone can tag a connection using only publicly available information, but only the Telex service (using a private key) can recognize that a connection has been tagged.
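
As a rough illustration of that asymmetry, here is a toy sketch in Python built from ordinary Diffie-Hellman and a hash. It is not the real Telex construction: the actual tag uses elliptic-curve operations and a careful encoding so that it is indistinguishable from the random nonce bytes it replaces, which this toy version does not attempt. It only demonstrates the key property that anyone holding the station's public key can produce a tag, while only the holder of the private key can recognize one.

    # Toy illustration of the public-key tagging idea (NOT the real Telex construction).
    import hashlib
    import hmac
    import secrets

    # Deliberately toy parameters: a small Mersenne-prime group, far too weak for real use.
    P = 2**127 - 1
    G = 3

    def station_keygen():
        priv = secrets.randbelow(P - 2) + 1
        pub = pow(G, priv, P)
        return priv, pub

    def client_make_tag(station_pub):
        """Client side: derive (ephemeral value, tag) from the station's public key alone."""
        r = secrets.randbelow(P - 2) + 1
        ephemeral = pow(G, r, P)                 # sent in place of random protocol bytes
        shared = pow(station_pub, r, P)          # Diffie-Hellman shared secret
        tag = hashlib.sha256(shared.to_bytes(16, "big")).digest()[:16]
        return ephemeral, tag

    def station_recognize(priv, ephemeral, tag):
        """Station side: only the private key lets us tell a tag from random bytes."""
        shared = pow(ephemeral, priv, P)
        expected = hashlib.sha256(shared.to_bytes(16, "big")).digest()[:16]
        return hmac.compare_digest(expected, tag)

    # A tagged connection is recognized; untagged random bytes are not.
    priv, pub = station_keygen()
    eph, tag = client_make_tag(pub)
    assert station_recognize(priv, eph, tag)
    assert not station_recognize(priv, eph, secrets.token_bytes(16))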

As the connection travels over the Internet en route to the non-blacklisted site, it passes through routers at various ISPs in the core of the network. We envision that some of these ISPs would deploy equipment we call Telex stations. These devices hold a private key that lets them recognize tagged connections from Telex clients and decrypt these HTTPS connections. The stations then divert the connections to anticensorship services, such as proxy servers or Tor entry points, which clients can use to access blocked sites. This creates an encrypted tunnel between the Telex user and the Telex station at the ISP, redirecting connections to any site on the Internet.


Telex doesn’t require active participation from the censored websites, or from the non-censored sites that serve as the apparent connection destinations. However, it does rely on ISPs to deploy Telex stations on network paths between the censor’s network and many popular Internet destinations. Widespread ISP deployment might require incentives from governments.

Development so Far

At this point, Telex is a concept rather than a production system. It’s far from ready for real users, but we have developed proof-of-concept software for researchers to experiment with. So far, there’s only one Telex station, on a mock ISP that we’re operating in our lab. Nevertheless, we have been using Telex for our daily web browsing for the past four months, and we’re pleased with the performance and stability. We’ve even tested it using a client in Beijing and streamed HD YouTube videos, in spite of YouTube being censored there.

Telex illustrates how it is possible to shift the balance of power in the censorship arms race, by thinking big about the problem. We hope our work will inspire discussion and further research about the future of anticensorship technology.

You can find more information and prototype software at the Telex website, or read our technical paper, which will appear at Usenix Security 2011 in August.

Yet again, why banking online .NE. voting online

One of the most common questions I get is “if I can bank online, why can’t I vote online?” A recently released (but undated) document, “Supplement to Authentication in an Internet Banking Environment,” from the Federal Financial Institutions Examination Council addresses some of the risks of online banking. Krebs on Security has a nice writeup of the issues, noting that the guidelines call for ‘layered security programs’ to deal with these riskier transactions, such as:

  1. methods for detecting transaction anomalies;

  2. dual transaction authorization through different access devices;

  3. the use of out-of-band verification for transactions;

  4. the use of ‘positive pay’ and debit blocks to appropriately limit the transactional use of an account;

  5. ‘enhanced controls over account activities,’ such as transaction value thresholds, payment recipients, the number of transactions allowed per day and allowable payment days and times; and

  6. ‘enhanced customer education to increase awareness of the fraud risk and effective techniques customers can use to mitigate the risk.’

[I’ve replaced bullets with numbers in Krebs’ posting in the above list to make it easier to reference below.]
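
To give a concrete flavor of what controls like #1 and #5 look like in practice, here is a hypothetical, simplified rule check (in Python) that a bank might run before releasing a transfer. The thresholds, field names, and logic are invented for illustration and are not taken from the FFIEC guidance.

    # Hypothetical sketch of "layered" account controls: per-transaction value thresholds,
    # a daily transaction limit, and allowable payment days. All numbers are made up.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class AccountLimits:
        max_amount: float = 5_000.00           # per-transaction value threshold
        max_per_day: int = 10                  # transactions allowed per day
        allowed_weekdays: set = field(default_factory=lambda: {0, 1, 2, 3, 4})  # Mon-Fri
        txns_today: int = 0

    def review_transaction(limits: AccountLimits, amount: float, when: datetime) -> list:
        """Return a list of reasons to hold the transaction for extra verification."""
        flags = []
        if amount > limits.max_amount:
            flags.append("amount exceeds per-transaction threshold")
        if limits.txns_today + 1 > limits.max_per_day:
            flags.append("daily transaction count exceeded")
        if when.weekday() not in limits.allowed_weekdays:
            flags.append("payment attempted outside allowed days")
        return flags

    # Example: a large weekend transfer trips two of the layered checks.
    limits = AccountLimits(txns_today=3)
    print(review_transaction(limits, 12_000.00, datetime(2011, 7, 23, 14, 0)))  # a Saturday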

So what does this have to do with voting? Well, if you look at them in turn and consider how you’d apply them to a voting system:

  1. One could hypothesize doing this – if 90% of the people in a precinct vote R, or 90% vote D, that’s not a good sign – but it would be too late to do much about it. Suggesting that there be personalized anomaly detectors (e.g., “you usually vote R but it looks like you’re voting D today, are you sure?”) would not be well received by most voters!

  2. This is the focus of a lot of work – but it increases the effort for the voter.

  3. Same as #2, but we have to be careful not to make it too hard for the voter! See, for example, SpeakUp: Remote Unsupervised Voting for one way this might be done.

  4. I don’t see how that would apply to voting, although in places like Estonia where you’re allowed to vote more than once (but only the last vote counts) one could imagine limiting the number of votes that can be cast by one ID. Limiting the number of votes from a single IP address is a natural application – but since many ISPs use the same (or a few) IP addresses for all of their customers thanks to NAT, this would disenfranchise their customers.

  5. “You don’t usually vote in primaries, so we’re not going to let you vote in this one either.” Yeah, right!

  6. This is about the only one that could help – and try doing it on the budget of an election office!

Unsaid, but of course implied by the financial industry’s list, is that the goal is to reduce fraud to a manageable level. I’ve heard that 1% to 2% of online banking transactions are fraudulent, and at that level it’s clearly not putting banks out of business (judging by profit numbers). However, whether we can accept as high a level of fraud in voting as in banking is another question.

None of this is to criticize the financial industry’s efforts to improve security! Rather, it’s to point out that try as we might, just because we can bank online doesn’t mean we should vote online.