"All problems in computer science can be solved by another level of indirection."
Diomidis Spinellis used this famous quote, often attributed to David Wheeler but sometimes also to Butler Lampson, to open his chapter in the book Beautiful Code on code structure and the usefulness of abstraction and layering in dealing with internal complexity.
I first came across this quote in a talk by VMware co-founder Mendel Rosenblum, in the context of virtualization and how a level of indirection between software and hardware makes all sorts of magic possible. It has stuck with me ever since.
But what does this have to do with Apple Pay and identity theft? Motivated by all the recent talk around the continuing adoption of Apple Pay, I decided to read up on its architecture and also set up Passbook on my iPhone. I wanted to better understand where credit card numbers and other transaction details are stored, whether on the iPhone or on Apple's servers.
What I learnt is that credit card numbers are stored neither on the iPhone nor on Apple's servers. During card registration the iPhone communicates with the issuing bank's servers to obtain an identifier, and only this identifier is stored on the iPhone and on Apple's servers. Payment transaction information includes this identifier, and the issuing bank de-references it to the actual credit card number for billing purposes.
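Here is a minimal sketch of that indirection, just to make the idea concrete. It is not Apple's or any bank's actual implementation, and the class and method names are my own invention: the issuer keeps a private mapping from an opaque token to the real card number, and everything outside the issuer only ever sees the token.

```python
import secrets

class IssuerTokenVault:
    """Hypothetical issuing-bank service that hands out payment tokens."""

    def __init__(self):
        self._token_to_pan = {}  # lives only inside the issuer's systems

    def register_card(self, pan: str) -> str:
        """Called during card registration; returns the identifier that the
        device and the payment network will use instead of the real number."""
        token = secrets.token_hex(8)
        self._token_to_pan[token] = pan
        return token

    def dereference(self, token: str) -> str:
        """Called by the issuer at billing time to recover the real card number."""
        return self._token_to_pan[token]


vault = IssuerTokenVault()
token = vault.register_card("4111111111111111")  # real number stays with the issuer
print(token)                                      # only this lives on the phone
print(vault.dereference(token))                   # issuer resolves it for billing
```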
The beauty of this architecture is that even if someone gets hold of this identifier, it is not nearly as useful as the credit card number itself. A tough problem, securing credit card numbers that had to be stored in multiple places such as the merchant's and the acquiring bank's systems for traditional payment processing, has been elegantly solved by a level of indirection!
Of course, the same mechanism can be used to secure other personally identifiable information such as driving license numbers, social security numbers, and so on.
And so, we may see the tough problem of protecting against identity theft in the physical world one day solved by a level of indirection, an old technique of computer science!
When there is a choice between incorporating security in the low-level communication infrastructure and making the endpoints themselves security aware, go for the latter. This argument, advocated in the 1981 paper End-to-End Arguments in System Design, seems to have played out quite nicely in the case of DNSSEC.
Reading Thomas and Erin Ptacek's well-presented arguments against DNSSEC and the follow-on FAQ, via this Hacker News post, got me interested in the topic and led to an afternoon of following links and perusing arcane IETF drafts and academic papers. What I discovered is an insightful lesson in system design using end-to-end arguments. I thought I should capture it here, but let me first cover some historical background.
DNS Security Extensions, aka DNSSEC, was proposed as a solution to a class of attacks that relied on DNS clients trusting DNS responses about hostname-to-IP-address mappings and building their own security model on top of that trust. The best illustration of one such attack is in security and network researcher Steven M. Bellovin's famous paper Using the Domain Name System for System Break-ins. This paper shows how the Unix tools rlogin, rcp and rsh can easily be abused to allow an untrusted, malicious machine to log in to the victim machine or run any command. These commands and their corresponding daemons rely on white-listing trusted hosts by domain name in the files hosts.equiv and .rhosts. However, the DNS architecture allows a DNS server under the control of a malicious party to convince the victim machine that the attacker's machine is one of the trusted ones. The actual mechanics of how this happens are quite involved and not that relevant for the current discussion. DNSSEC introduces the notion of signed DNS responses and makes it much harder for a malicious DNS server to enable such attacks.
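To see why this trust model is so fragile, here is a simplified sketch of the kind of check an rlogind-style daemon performs. This is my own illustration, not the actual daemon source: the server maps the peer's IP address back to a hostname with a reverse DNS lookup and trusts the connection if that name appears on its whitelist.

```python
import socket

# Hostname-based trust in the style of hosts.equiv / .rhosts (illustrative names).
TRUSTED_HOSTS = {"dev1.example.com", "build.example.com"}

def is_trusted(peer_ip: str) -> bool:
    # Reverse DNS lookup: the answer comes from whoever controls the reverse
    # zone for peer_ip (possibly the attacker).
    hostname, _aliases, _addrs = socket.gethostbyaddr(peer_ip)
    return hostname in TRUSTED_HOSTS

# If an attacker controls the reverse DNS for their own address, they can make
# gethostbyaddr() return "dev1.example.com" and walk right in.
```

Whoever controls the relevant DNS answers controls the outcome of that check, which is exactly the trust DNSSEC tries to harden and ssh simply abandoned.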
The security issues with rlogin and related tools were so fundamental and severe that the industry moved quickly to address them by adopting an entirely different set of tools for similar ends: ssh, scp and rsync. These tools completely bypass the trust model based on domain names and instead rely on public/private key pairs and cryptography to establish trust between accounts on two different machines.
From an architectural perspective, this is an excellent example of ensuring security by moving it away from the lower-level communication infrastructure (i.e., DNS) to the endpoints (i.e., the communicating machines and accounts) themselves.
DNSSEC, whose purpose is to incorporate a form of security at the core of the Internet, is still struggling for adoption.
Creating a product or service for a new market is hard, but even harder is identifying the whitespace in an existing market and then filling it with a new product. This is what the serial entrepreneur John Osher accomplished in 2000 with SpinBrush, a battery-powered toothbrush that sold for just $6 when others sold for more than $50, and which he later sold to P&G for $475 million. We, the ELPP class of fall 2014, had learnt about John Osher as part of the Dr. John's Products, Ltd. case study and were fascinated by his story. Each of us wanted to know more about how John went about creating SpinBrush. The case study did a good job of presenting the business case, but said precious little about John's way of thinking.
So when I came across a few of his quotes while reading Follow The Other Hand, an excellent book by Andy Cohen on looking at things differently, I decided to reproduce them here.
On why electric toothbrush manufacturers didn’t create something like SpinBrush first:
They were all following the wrong hand. They were going against each other saying these are the rules of the electric toothbrushes. They must cost $50 to $75. They must last three years and do all this and all that. And they all use Rolls-Royce-quality motors and expensive parts, and they are basically relatively low volume-high margin. And that's where their whole focus is. Nobody, because of that, would even think that it's possible to manufacture something for $1.40.
He goes on to add how he, the creator of SpinPop, a battery-powered lollipop that spun in the mouth, could look at the whole problem from a different perspective:
Now I could because I've been making battery-powered candy items for 80 cents. I've 60 cents more to spend. What have I got to do? I have to waterproof it. I have to add bristles. I have to get enough torque. But otherwise, I am using similar motors, batteries, packaging, and everything else. So I've got 60 cents to do this. My view could not have been understood by those companies. It would have never entered into their minds.
On seeking truth and not confirmation:
The truth always comes out with the consumer. And some truths are very simple. You’ll show a package and it’s real pretty and you’ll notice everyone likes how pretty it is. But if you take two seconds to ask them what it says you realize they couldn’t read it. Your fonts are not right or the contrast isn’t good – simple things that could make the giant difference between success and failure.
On trusting your own experience:
We were on a conference call reviewing a prototype that had the bristles going up and down while the other one turned. We had twenty seven people on the call, and they are reading the results of the consumer test. They are going through numbers and charts and all this crap. And I said, “Well, wait a minute. I don’t get it. What did you guys think of the product? I mean, you know, sure we’ll read what they all said in these tests and everything, but we have got twenty-seven people here. What’d you guys think of it?” Not one of them had tried it.
It is always fascinating to follow the specifics of design considerations and the relative trade-offs when one alternative is chosen over another, equally persuasive one. In my line of work, most such decisions and discussions happen within an organisational context, and even when they are generic in nature, they are rarely fit for external publishing. However, every now and then there is an opportunity to watch a public discussion of two seemingly equivalent approaches to a technical problem, with one then being chosen via a vote. The HTTP2 WG's decision to develop and adopt ALPN over NPN, which Google had proposed as part of SPDY, presented one such opportunity.
The Context. Acknowledging that most readers of this blog are unlikely to know all the acronyms I snuck in just now, or the problem they are trying to address, here is a brief overview: HTTP 1.1, the current messaging protocol powering the Web, is terribly inefficient at fetching all the bits from servers that make up a rich modern web page stuffed with images, CSS and JavaScript-produced effects, and this inefficiency manifests as slower page load times. To address this, Google started experimenting with an alternative protocol called SPDY. Later on, an IETF WG was formed to standardize this approach as HTTP2, a major revision of HTTP 1.1. You can read more about it in lead cURL developer Daniel Stenberg's excellent HTTP2 Explained paper.
The Problem Statement. One of the problems to be addressed by the new protocol is how to negotiate the use of HTTP2 over port 80 (used by http:// ) and port 443 (used by https:// ) between the client, usually the browser, and the Web server. The straightforward solution of using different port numbers, and hence new URL schemes such as http2:// or http2s://, suffers from the obvious problem of not being compatible with the vast number of existing URLs embedded in current documents and pages. Another solution is to let the client use an HTTP 1.1 header to indicate its preference for HTTP2 in the initial request and then let the server switch to HTTP2 for subsequent interactions over the same TCP connection. In fact, this is how the HTTP2 spec requires the negotiation to happen for the initial connection when accessing http:// URLs. However, this approach introduces an additional round trip, and hence latency, between client and server, somewhat defeating the primary goal of speeding up the Web, and is certainly less than ideal. Since the Web is increasingly using https:// URLs, the really interesting problem to be solved is the negotiation when the transport is TLS.
The Solution Alternatives. The Google SPDY specification used TLS Next Protocol Negotiation (NPN) to negotiate the use of SPDY. Here is how it works: as part of TLS connection setup, the client indicates its desire to negotiate the next protocol in its ClientHello message, the server sends back the list of all protocols it supports, and the client decides which protocol to use and sends that choice to the server AFTER the encryption protecting the contents has been established. The key aspect is that the client makes the decision and the chosen protocol is sent to the server over the encrypted transport. The HTTP2 WG, however, came up with an alternative solution: the Application-Layer Protocol Negotiation (ALPN) extension for TLS. In this solution, the client sends the list of all supported protocols as part of the ClientHello message and the server makes the decision. The client's list and the server's choice are exchanged before the connection has been protected with encryption.
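To make ALPN a little more concrete, here is a small client-side sketch using Python's ssl module. The hostname is just an example of an HTTP2-capable server; any such server would do. The client's protocol list goes out in the ClientHello, and the server's pick is available as soon as the handshake completes.

```python
import socket
import ssl

host = "www.google.com"  # example of an HTTP/2-capable server

context = ssl.create_default_context()
context.set_alpn_protocols(["h2", "http/1.1"])  # list offered in the ClientHello

with socket.create_connection((host, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        # The server's choice is known once the TLS handshake completes.
        print(tls_sock.selected_alpn_protocol())  # e.g. "h2", or None
```

Note that the offered list travels in the clear as part of the ClientHello, which is exactly the property at issue in the trade-off discussion below.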
The Trade-offs. So what are the trade-offs between NPN and ALPN? NPN seems to provide better security and does have a few ardent supporters. ALPN allows reverse proxies and load balancers to do their job more efficiently and may also allow a server to reroute the connection or use a different certificate based on the application protocol. It also follows the established TLS extension architecture used for the SNI extension (which lets multiple virtual hosts, each with its own fully qualified domain name, accept HTTPS connections on the same IP address).
The final decision was to extend TLS with ALPN. A reasonably good decision, IMHO.
The graduation ceremony on the UC Berkeley campus, including the final lecture by Professor Ikhlaq Sidhu and the reception at the Faculty Club that followed, was a well-planned and well-attended event and an occasion to cement many newly formed friendships.
However, the best part was bumbling around the campus on a cold winter night with four other Yahoos, trying to navigate back to the parking lot.
The accounting seminars delivered by Suneel Udpa of the Haas School of Business, contrary to everyone's expectations, were actually fun and entertaining, and at the same time very informative. I had followed HP's acquisition of Autonomy and the subsequent huge write-downs closely, but had no idea of the accounting involved in the deal. If the purpose of the course was to equip us with a new perspective on the things around us, then it certainly succeeded.
The negotiations workshops by Holly Schroth were another of those classes that had a huge impact on how I should go about getting things done, especially when getting things done involves influencing a group of people. The first role play in the workshop was a classroom exercise where two of us had to come up with a week-long getaway plan for the whole workgroup, with the objective of maximizing our combined payoff. Each of us was given a sheet with payoff points for each of the parameters: getaway location, duration, mode of transportation and so on. My partner and I negotiated very hard on each of the parameters, and yet ended up with the lowest combined payoff in the class. It turned out that we were too focused on maximizing our own payoffs and didn't try to seek common ground, not realizing that our sheets had different payoff points and that it was possible to achieve a higher combined payoff without bargaining too hard on each parameter. Of course, this was just a class role play, but the larger point was that real life is no different and we must always approach a situation with that in mind.
I have already written about the group project and the various case studies in this blog. Each experience was unique in its own way!
I found the story of a very famous man, told this way, really inspirational.
Age 7: His family was forced out of their home and he worked to support his family.
Age 9: His mother passed away.
Age 19: His sister died.
Age 22: A business venture failed.
Age 23: He ran for the State Legislature, but lost. In the same year, he also lost his job. He wanted to go to law school but couldn't get in. So he taught himself, voraciously reading whatever he could lay his hands on.
Age 24: He borrowed money from a friend to start a business. But the business failed and by the end of the year, he was bankrupt. At this point, he launched his legal and political career. His height and mastery of public speaking helped him succeed in these professions.
Age 25: He ran for the State Legislature again and this time, won.
Age 26: The year was looking better as he was engaged to be married. Unfortunately, his would-be fiancée died and he was grief-stricken. The next year he had a total nervous breakdown and was bedridden for 6 months.
Age 27: He sought to become Speaker of the State Legislature, but was defeated.
Age 31: He sought to become Elector, but was defeated.
Age 33: He married a wealthy lady and over time they had 4 sons.
Age 34: He ran for Congress, but lost.
Age 37: He ran for Congress again. This time he won and moved to Washington.
Age 39: He ran for re-election to Congress and lost.
Age 40: He sought the job of Land Officer in his home state, but didn’t get the job.
Age 41: His 4-year-old son died.
Age 45: He ran for the Senate of the United States and lost.
Age 47: He sought the Vice Presidential nomination at a national convention. He got less than 100 votes and lost.
Age 49: He ran for the Senate again, but lost again.
Age 51: He was elected President of the United States.
Age 53: His 12-year-old son died.
Age 56: He was assassinated.
Yes, this is the story of the great US president, Abraham Lincoln.
The typical commission paid to brokers for selling an existing residential home is 6%, half of which goes to the seller's agent and the rest to the buyer's agent. In recent years, the average rate has come down a bit, mostly due to competition among brokers, but is still quite high at above 5.5%. This is puzzling, as the rate is much lower in many OECD countries: 1-2% in the UK, 3% in Japan, 2-3% in Norway, 1.5-2% in Singapore, and only 1% in Hong Kong.
The US Department of Justice investigated a number of real estate brokerage practices, enforced via associations of realtors, that could have reduced real competition and kept prices up, such as denying discount brokers access to home listings and setting rules that reduced competition. Most of these disputes were settled and the practices banned around 2008. Still, the commission rates have remained largely unchanged.
Why?
A plausible answer comes from the following observation (source):
In the UK prior to a 1970 order, national and local associations had fee schedules for sellers “which were typically tapered from 5% to 1.5% or were fixed between 1.5-2.5%.” These fee schedules were allegedly “recommended,” but the associations also had rules forbidding price competition. The average commission rates varied across markets, but they were relatively stable in local markets. Such schedules could be enforced through non-cooperation. That is, brokers that were charging reduced commissions would not receive the same levels of cooperation as brokers charging the “standard” commission. A 1970 order by the UK government banned fee schedules for real estate brokers. By 1979, commission rates in the South of England had fallen from 2.8% to 2% and those in the North from 2.3% to 1.8%.
Although there is no legally mandated commission rate in the US, a 6% rate paid by the seller and split between the two brokers is an accepted fact and is rarely challenged. The agents have been very successful in maintaining this as the official rate, and it has become the anchor point for any subsequent negotiation. Knowledgeable buyers and sellers are able to negotiate rebates, but the reduction in fee is usually quite small.
In my opinion, part of the problem is the market structure: there are two agents, one for the buyer and another for the seller, plus their brokerage houses, each getting only 1.5% of the purchase price. Relatively easy entry into the market as a realtor and a relatively high payoff per deal keep the market flush with a large number of realtors (as per the NAR membership report, there are more than a thousand local associations and more than a million members in the country), depressing average income even when the commission rates are quite high.
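A quick back-of-the-envelope calculation, with an illustrative sale price I picked purely for the arithmetic, shows how the headline 6% splits four ways down to that 1.5% per party:

```python
# Back-of-the-envelope arithmetic with an illustrative (made up) sale price.
sale_price = 500_000

total_commission = 0.06 * sale_price  # $30,000, paid by the seller
per_side = total_commission / 2       # $15,000 (3%) to each side of the deal
per_party = per_side / 2              # $7,500 (1.5%) each, assuming the agent
                                      # and the brokerage house split evenly

print(total_commission, per_side, per_party)  # 30000.0 15000.0 7500.0
```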
Although prospective sellers and buyers are visiting sites like Zillow and Trulia to do their market research and find their dream homes, as much as 88% of transactions still involve a real estate agent, and commission rates haven't gone down all that much. Zillow and Trulia, which have announced their merger, provide enhanced information to their visitors and make money by advertising and generating leads for brokers, rather than by helping buyers or sellers close transactions online. Discount online brokers such as Redfin are actually struggling and haven't seen real success.
What exactly is going on here? Why has the Internet not been able to disrupt residential real estate transactions? This Businessweek article offers many answers: the psychological and logistical support provided by agents for a once-in-a-lifetime-for-most and highly complex transaction; subtle collusion among agents; the dominant role of the National Association of Realtors; and so on.
It is clear that human advisers have a role to play in real estate transactions. However, there are areas where technology can add value and bring efficiency:
The current practice for a seller is to contract with a seller's agent for the listing, and that agent gets the commission whether she brings the buyer or not. This might have made sense when the technology to upload a property listing was complicated and required specialized expertise, but that is no longer true.
One of the services provided by the buyer's agent is taking the buyer on physical tours of selected properties, using her access to keys. Technology, especially smartphone-enabled technology, can certainly allow prospective buyers to get access to a property in an auditable and traceable manner without an agent.
More specialized services involving humans, such as consultation with a real estate expert, staging the house, home appraisal, home inspection and so on, can be farmed out to service providers for a fixed fee.
Just because no one has yet figured out how to disrupt this market doesn't mean that it is not possible. In fact, our ELPP Project-1 is exactly to figure out the what and how of "Information Technology and Real Estate Market".
Netflix is a model management case study for a number of reasons:
It disrupted Blockbuster, a highly profitable and growing incumbent video rental company, by riding the right technology disruption wave: the transition of video content from VHS tape to DVD.
It has successfully extended its DVD-by-mail business to include video streaming, in the presence of well-funded players in this new market.
It has successfully innovated and executed well on many fronts over a period of time to consolidate its position.
It is particularly relevant to me as an Internet technologist, a Netflix subscriber, a past investor, and someone with friends who have worked for the company.
The HBS case study we used in the class was published in May 2007 and presents the choices for entering the video-on-demand market. Another study that I found on the Internet covers the trajectory of the company up to 2012 and presents strategic options for growth.
It is worth recapping the major decisions made and superbly executed by Netflix over time:
In the year 2000, Netflix described itself as the ultimate online destination for movie enthusiasts, with services such as price comparisons, theater tickets, recommendations and so on. However, it eventually focused on the DVD rental business.
It developed cross-promotional programs with manufacturers and sellers of DVD players, such as including Netflix coupons in the packaging, at a time when the success of the DVD medium was less than certain. This allowed Netflix to gain mind share among the movie-watching population.
It changed its pricing from pay-per-DVD-with-penalty-for-late-return to a monthly subscription with no late fees to increase repeat customers. This not only allowed Netflix to differentiate itself from its competitors but also increased stickiness.
To better manage the expense of acquiring DVDs, Netflix needed to distribute demand over all its inventory rather than have it concentrated on the latest releases. It accomplished this through personalized recommendations that were then filtered by available inventory (see the sketch after this list). To further manage this expense, Netflix cut deals with video production companies to reduce the upfront fee per title in return for a fee based on the title's total number of rentals over a given period of time.
Netflix eventually became a preferred outlet for niche movies and documentaries that could not be profitably released into theaters but still had a significant viewership over the whole US population.
Setting up distribution centers all over the country and partnering with USPS allowed Netflix to optimize delivery and return times, and hence improve the utilization of its DVDs.
Having a queue of videos waiting to ship made it difficult for existing subscribers to leave.
Partnerships with manufacturers of DVRs, set-top boxes and game consoles allowed Netflix to enter the market for streaming to TVs.
Although Netflix tried to separate its streaming business from its DVD rental business, it responded to a strong customer backlash by keeping them as part of one business, but with different subscription plans. This strategy has worked quite well.
Success of Netflix originals such as House of Cards has somewhat reduced its dependence on content creators to retain its existing subscribers.
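As promised above, here is a toy sketch of the recommend-then-filter idea. It is not Netflix's actual system, and the titles, scores and function names are entirely made up: compute personalized scores first, then keep only titles that are actually in stock, steering demand away from scarce new releases and toward the rest of the inventory.

```python
def recommend(user_scores, available_copies, k=3):
    """user_scores: predicted rating per title; available_copies: DVDs on hand."""
    # Keep only titles with at least one copy in stock, then rank by score.
    in_stock = {title: score for title, score in user_scores.items()
                if available_copies.get(title, 0) > 0}
    return sorted(in_stock, key=in_stock.get, reverse=True)[:k]

scores = {"New Blockbuster": 4.8, "Niche Documentary": 4.5, "Classic Drama": 4.2}
inventory = {"New Blockbuster": 0, "Niche Documentary": 12, "Classic Drama": 7}
print(recommend(scores, inventory))  # the out-of-stock new release drops out
```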
You may understand the mechanics of driving very well, the purpose of the gears, steering wheel, brakes, accelerator and so on, and may even know how and when to use them, and still not be a good driver. The only thing that makes one an expert driver is lots of practice, so much so that slowing down, speeding up, taking turns, and avoiding an oncoming car become second nature, without needing any explicit thought.
The same is true in many areas of expertise: maths, programming, and leadership being foremost among those I deal with.
True experts have a profound conceptual understanding of their field. But the expertise built the profound conceptual understanding, not the other way around. There’s a big difference between the “ah-ha” light bulb, as understanding begins to glimmer, and real mastery.
The development of true expertise involves extensive practice so that the fundamental neural architectures that underpin true expertise have time to grow and deepen. This involves plenty of repetition in a flexible variety of circumstances.