The single most important thing to understand about your API is how you can generate revenue from it. That depends on who your customers are and how they obtain value through your API.
Some firehoses to read from, other than Twitter’s:
- Google Buzz, through their activity firehose API methods;
- FriendFeed, using their real-time updates methods.
Stowe Boyd writes about a possible way to address different user profiles on different clients. After exposing a possible microformats-based approach, he talks about a microsyntax solution to the problem:
A microsyntax approach would be something visible in the stream, like including a prefix in front of an @mention or retweet to denote original or desired client:
dabr@pigsonthewing Nice profile!
Here, the implication is that I am referring to Andy’s profile on the Dabr client.
I disagree with the proposed syntax format and propose a new one: @user@client. Here’s my rationale behind it:
- It doesn’t break any functionality (just like the proposed syntax): the user will still see this tweet as a mention;
- It adds new functionality: if the client is also a Twitter user, it will see this tweet as a mention too and can act upon it, if desired;
- It’s more meaningful than the proposed syntax: you can read it as “user at client”. The original syntax reads as “client at user”, which just seems awkward.
Afterthought question: can this microsyntax be expanded to @user@application, so that we can finally address any user on any Web application easily? I’m still thinking about that.
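To make the idea concrete, here is a minimal sketch of how a client might parse the @user@client microsyntax. The regex and function names are mine, not part of any proposal; a plain @mention simply parses with no client part.

```python
import re

# Matches "@user" optionally followed by "@client".
# \w covers the letters, digits and underscores Twitter allows in usernames.
MENTION_RE = re.compile(r"@(?P<user>\w+)(?:@(?P<client>\w+))?")

def parse_mentions(tweet):
    """Return (user, client) pairs; client is None for a plain @mention."""
    return [(m.group("user"), m.group("client"))
            for m in MENTION_RE.finditer(tweet)]

# The mention still starts with "@user", so existing clients keep
# treating it as a regular mention for that user.
parse_mentions("@pigsonthewing@dabr Nice profile!")
# → [('pigsonthewing', 'dabr')]
```

Because the username always comes first, older clients that only understand @user keep working, which is the backwards-compatibility argument made above.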
Read the original post: Twitter URLs: Are Microformats The Answer To A Real Problem? on /Message.
This is a post to test a communication flow using a feed reader and a blog post as a “distributed twitter”.
Read the original story: Where is RSS? on Scripting News.
Now that Twitter’s OAuth feature is in public beta, some people argue that Twitter should also provide a data storage facility through its API. This would let developers attach payloads to different Twitter objects, but it would also create another information silo instead of leveraging existing data storage providers.
I agree with Kevin Marks when he asks “Why should the storage and the event stream come from the same provider?”.
Quoting Dave Winer’s article “Twitter and OAuth, interesting brew”:
Maybe it won’t be Twitter or Facebook, but whoever builds the next consensus platform will have open data storage APIs in addition to identity. It’s a vital part of identity. We’ve been waiting too damned long for this.
Let’s see what happens.
Quoting the Twitter status blog post “A note about per-day following limits”:
While there are technical reasons behind having some limit on following activities, this per-day limit exists to discourage spamminess. Also, it is unlikely that anyone can actually read tweets from thousands of accounts, which makes the mass following activity disingenuous.
What do you think?
“How to Share a Secret” is the title of a paper written in 1979 by Adi Shamir (best known for his work on the RSA algorithm). The paper describes a method for dividing data into n pieces such that any k of them can reconstruct the original, while knowledge of even k − 1 pieces gives absolutely no information about it. Quoting the author:
This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
What’s so fascinating about this technique? Here’s a list of properties taken from its Wikipedia article:
- The scheme is “information-theoretically secure”, meaning its security doesn’t depend on any assumption about the attacker’s computing power;
- It’s also “perfectly secure”, meaning that any set of shares below the threshold reveals no information whatsoever about the secret;
- It’s minimal, because the size of each piece doesn’t exceed the size of the original information;
- It’s extensible, allowing the addition of new pieces without having to change existing ones;
- It’s dynamic, meaning that you can change all the pieces without changing the original data;
- It’s also flexible, allowing each party to receive a different number of pieces according to their importance within the organization.
If you read through the paper you’ll find an example describing a situation where several signatures are required to sign a check. Again, quoting the original:
If each executive is given a copy of the company’s secret signature key, the system is convenient but easy to misuse. If the cooperation of all the company’s executives is necessary in order to sign each check, the system is safe but inconvenient. The standard solution requires at least three signatures per check, and it is easy to implement with a (3, n) threshold scheme. (…) An unfaithful executive must have at least two accomplices in order to forge the company’s signature in this scheme.
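For readers curious how a (3, n) threshold scheme actually works, here is a toy sketch of Shamir’s construction: the secret is the constant term of a random degree-2 polynomial over a prime field, and shares are points on that polynomial. The prime, the secret value and the function names are all illustrative; real use needs a much larger prime and a cryptographic RNG.

```python
import random

P = 2087  # a small prime, for illustration only; must exceed the secret

def split(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# Five executives; any three signatures suffice, as in the (3, n) example.
shares = split(1234, k=3, n=5)
assert reconstruct(shares[:3]) == 1234
assert reconstruct(shares[-3:]) == 1234  # any three shares work
```

Two shares determine infinitely many degree-2 polynomials, one for each possible secret, which is exactly why an unfaithful executive needs at least two accomplices.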
Now, suppose we’re not talking about some company’s executives but instead about Web Services, and instead of a secret signature, we’re talking about users’ credentials. The bank will become some Web application where the user is registered and the money will become the user’s data on that application.
As a side note, I think this analogy describes the problems we’ve been having lately with the password anti-pattern, more specifically with third-party applications asking for your Twitter credentials. The problem occurs not only when you give your password to a third-party application, but also with other authentication mechanisms: if the third-party application is hijacked, your data can be compromised.
Now, back to our solution: suppose you build your application separating particular objects physically by using Web Services. Also, suppose these Web Services are invoked in a secure way. In this scenario, whenever you want to execute a specific task, a set of Web Services must be called in a specific order.
If each Web Service is given access to a copy of the credential, the system will be very easy to misuse. A possible solution is to divide a secret that can later decrypt the credential (I’m not going into the details of that process right now) into different tokens that are spread across all the Web Services involved in the process.
Now, even if one part of the system is hijacked or in some way compromised, there’s still no way to decrypt the credential and use it in unexpected ways. Even if the attackers gain access to the Decrypt Web Service, they must also reproduce the list of tokens in the correct order.
You can argue that if someone eavesdrops on the connection to the application where the credentials are being sent, e.g. Twitter, they could still see them in plain text. That’s a whole different problem, not addressed by this solution. OAuth, for instance, addresses it by never sending the password itself: requests are signed with a shared secret known only to sender and receiver, and include one-time values (nonces) to prevent replays.