Posted: May 18th, 2013 | Author: Jon Evans | Filed under: TechCrunch | Tags: Chrome, OPINION
I’ve spent the last two weeks wandering around London, Paris, and Istanbul (not Constantinople). As an experiment, I left my trusty MacBook Pro behind and brought only the $199 Chromebook on which I type this. And to my considerable surprise it has served admirably. So admirably, in fact, that I believe ChromeOS is only one or two iterations away from being the right choice for many, if not most, homes.
I was skeptical to begin with: after all, I thought, Chrome is acceptable when you’re online, but I’ll be spending much of my travel time offline, which probably makes it a non-starter, right? So I devoted most of my Chromebook’s (bizarrely spacious) 320GB hard drive to an install of Ubuntu. Which I then never used even once.
I suppose I would have if some kind of critical work emergency had come up: after all, I’m (mostly) a software developer by trade, and ChromeOS isn’t much of a developer platform. But that didn’t happen. Good thing, too, because Linux-on-the-desktop seems as ugly and frustrating as ever for someone, even a deeply techie someone, who just wants to get things done.
ChromeOS, though, is both very pretty and almost painless. Its biggest problem is that out of the box it naively insists that you’ll be online all the time–even though it can be perfectly serviceable while disconnected. You may not have known that nowadays both Gmail and (most) Google Docs can work just fine offline.
And if you didn’t, well, Google sure isn’t about to proactively tell you. You actually have to make a point of seeking out, installing, and then activating Offline Gmail and Offline Google Docs from the Chrome Web Store. Why ChromeOS doesn’t prompt you with this option as part of the onboarding process is truly beyond me. Similarly, why on Earth are “Gmail” and “Offline Gmail” two separate apps? Google may be full of incredibly smart people, but they can also be insanely myopic when it comes to end users.
Once those were up and running, though, my Chromebook was a charm to use under almost all circumstances. Offline, I could write documents, check old email, and even play a few free games from the Chrome Web Store, although most Chrome games still seem to require an initial server connection to start up. And online, of course, the world was my oyster.
Did I have access to all the features of, say, Word or Excel? Hell, no. (You still can’t create a Google Docs spreadsheet when offline, either.) Was it an all-guns-blazing gaming experience? Again, no, although Chrome’s rapidly evolving Native Client ought to keep matters improving here. What I could do, though, was email, play a few games, surf the Net, communicate (via GChat or Google Hangouts, which worked excellently), and write documents — which unless I’m much mistaken is pretty much everything that most people use their computers for at home.
ChromeOS still needs better, and simpler, offline support; and I’d like to see more diversity of available hardware; but once those two things are addressed, which shouldn’t take long, I would happily recommend a Chromebook to my parents the next time they upgrade. In fact I’d happily recommend one to anyone who wants a small second laptop for travel, or who doesn’t need to do serious work on their home computer.
Long ago Neal Stephenson, when comparing operating systems to vehicles, described MacOS as a hermetically sealed day-glo VW Beetle; MS Windows as a clunky two-tone station wagon; and Linux as the product of a horde of dreadlocked hippies who spent their time building M1 battle tanks and giving them away for free. Which sounds great at first, but who actually wants to drive a tank?
Well, if I may extend that a little, ChromeOS is like a sleek, shiny Airstream trailer built around that same M1 engine. There are many things it can’t do, and a bunch more at which it’s very clumsy, but within its bailiwick, casual exploring, it’s both very attractive and awfully comfortable.
I don’t think Stephenson’s original analogies quite hold any more, though. Nowadays OS X is more like a Porsche…and Windows is a gas-guzzling pickup truck, or a cube van that makes disturbing noises whenever it corners. Still suitable for work, but not particularly great for either road trips or sub/urban living — and nowadays looking nervously over its metaphorical shoulder at the flotilla of drones and self-driving cars on the horizon.
Image credit: Dan McCullough, Flickr.
Posted: May 11th, 2013 | Author: Jon Evans | Filed under: TechCrunch | Tags: AT&T, OPINION, Verizon
A few days ago I landed in England and, expecting little, slipped an old UK SIM card into my phone. I’d bought it when living in London five years ago, and hadn’t used it in more than a year. But to my amazement it was still active — as was the money I’d added to its pay-as-you-go account 16 months earlier…and then I received a friendly text message informing me that my data costs were now £1 per 100MB. Another SMS popped up when I emerged from the Channel Tunnel in France a few days later, informing me it would cost me 8p to send texts and 7p per minute to receive calls.
Can you imagine any of that happening with an American phone company? Or Canadian? North American carriers generally expire pay-as-you-go accounts after 90 days of inactivity, and it’s at best a struggle to get them to support data at all, much less seamlessly, much much less at that price. (Which isn’t even that great, by global standards; in India two years ago I was charged $1 for a full gigabyte.)
As for roaming, you’re very lucky to get American or Canadian pay-as-you-go accounts that can roam across that vast undefended border at all, and if you do, they’ll charge you the proverbial arm and a leg. That same UK SIM card worked just fine in Kenya last year, and as I type this I’m about to land in Turkey, where I expect to receive another text informing me that my UK pay-as-you-go number continues to work just fine outside the EU, albeit more expensively. (Update: yep.)
What’s wrong with this picture? Why are America and Canada so unbelievably awful? Yeah, I’m being anecdotal, but there is all kinds of data to support the notion that cell service there is outlandishly expensive compared to almost all of the rest of the developed world. (And worse than a lot of the developing world, too.)
Part of it is laissez-faire capitalism run amok. Don’t get me wrong. I’m a staunch defender of capitalism…that is, well-regulated capitalism. Until 2008 that was a hard row to hoe among many of my friends, but the recent embarrassing spate of financial cataclysms has made it much easier. Why is my UK SIM card relatively cheap to use in France? Because EU regulators insisted on it. Why are America’s carriers so parasitical, predatory, gouging and user-hostile? Because they can be, which in large part means because their regulators (including, alas, Canada’s CRTC) don’t insist on much of anything.
Oh, sorry, no, my mistake. They do insist on perpetuating this state of affairs. Consider the recent breathtakingly wrong decision to make it illegal under the DMCA to unlock your phone. This was one of those classic bureaucratic catastrophes: every individual step that led to it doubtless made sense to the people involved, who were too close to their system to take a step back and notice that its actual outcome was complete insanity. If anything it should be illegal to lock phones, not unlock them. This is regulatory capture taken to new heights of Stockholm-Syndrome madness.
And yet. At the end of the day the true power lies not with the carriers, but with their customers. Alas, American and Canadian customers seem to have been hypnotized into a kind of learned helplessness where they just sit there and silently accept locked phones, bloated Kafkaesque pricing plans, insane roaming charges, Android phones stuffed with crapware, and two- or even three-year locked-in contracts.
But they don’t have to. That’s what’s so infuriating. You too could buy an unlocked phone — an unlocked Nexus 4, which is a terrific phone, costs all of $299! (And I have high hopes that Google’s rumored new X Phone initiative will be even cheaper.) You too could switch to T-Mobile’s monthly pricing plan, or Straight Talk’s, instead of signing a contract. You’d more than make back the upfront costs of the unlocked phone in less than a year. And if enough people did it, the carriers would be forced to compete on quality and improve their pricing, rather than rely on their customers’ passive despair.
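If you don’t believe me about the payback period, here’s the back-of-the-envelope math in code. Every price below is an illustrative assumption of mine, not an actual carrier quote:

```python
# Back-of-the-envelope payback math for buying an unlocked phone.
# All prices here are illustrative assumptions, not actual carrier quotes.
unlocked_phone = 299        # e.g. a Nexus 4, bought outright
prepaid_monthly = 45        # contract-free plan (T-Mobile, Straight Talk, etc.)

subsidized_phone = 199      # typical on-contract handset price
contract_monthly = 80       # typical two-year contract plan

upfront_premium = unlocked_phone - subsidized_phone     # $100 extra up front
monthly_savings = contract_monthly - prepaid_monthly    # $35 saved per month

print(f"Break even after {upfront_premium / monthly_savings:.1f} months")
# -> Break even after 2.9 months; everything after that is pure savings.
```

With assumptions anywhere in that neighborhood, the unlocked route pays for itself in months, not years.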
The logical conclusion is that if your phone is locked, or if you’re on a multi-year contract, then you have no right to complain about your terrible carrier — because you’re part of the problem. “The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings.” In fact, you’re ruining it for the rest of us. Thanks.
But it’s not too late for redemption. Just repeat after me: “I solemnly swear that I will never buy a locked phone or sign a multi-year phone contract again.” And when your current contract expires, do just that. Maybe, just maybe, with your help, we can finally defeat these gargantuan economic tapeworms called AT&T, Verizon, Rogers and Bell — and finally catch up with the civilized world.
Image credit: Tapeworm, by Rhys Ormond, on Flickr.
Posted: May 4th, 2013 | Author: Jon Evans | Filed under: TechCrunch | Tags: Apple, Google, iCloud, OPINION
A new front has opened in the smartphone war, and for the first time in many years, Apple is both outnumbered and outgunned.
I’m not talking about the phones themselves. iOS is still better than Android, although the gap has narrowed. The next iPhone will doubtless be the best phone in the world when it’s released, as ever. It won’t be as customizable – no Swype, no Facebook Home – but those remain relatively minor inferiorities.
The new battlefront is different. The new battlefront is the cloud: Google Maps vs. Apple Maps, Siri vs. Google voice search, iCloud vs. Dropbox et al, and Google Now vs…well, nothing at all, yet. This is a big deal. As we grow accustomed to an always-online world of ubiquitous computing, your phone becomes less a device in and of itself and more a gateway to its cloud services. And it’s very hard to argue that Apple is anything but the serious underdog here.
You know they have a problem when even die-hard Apple supporter John Gruber is linking to pieces like “Apple’s Broken Promise: iCloud and Core Data,” which is replete with quotes like “If they couldn’t get iCloud working, who can?” … “It just doesn’t work” … “Many of these issues take hours to resolve and some can permanently corrupt your account” … “A developer’s worst nightmare.”
Remember when Siri was introduced, and people were pronouncing it a serious threat to Google Search itself? No, really. Haven’t heard that one in a while, have you? And not without reason; Siri seems to have stagnated, while over in Mountain View, Google is doing some truly phenomenal things with many-layered neural networks — and superior voice search is just one of the applications.
Can Apple match that? Who knows — but it’s safe to say that this kind of thing, cutting-edge technology beyond great hardware and superb design, isn’t their core strength. It’s Google’s. As is shown by Google Now, which is inexplicably treated as nothing more than Google’s answer to Siri by hordes of writers who apparently can’t think beyond simple dichotomies. It’s much more than that; until Siri tells you what you should do before you ask, there’s really no comparison.
Meanwhile, Google Now has been released for iOS, continuing Google’s ongoing battle to dominate the iPhone app space. (They’ve been quite successful; the two most-downloaded iOS apps are YouTube and Google Maps.) As TC’s Semil Shah has pointed out, thanks to Apple’s iOS restrictions, no third party could build a true iOS counterpart to Android’s Google Now. Only Apple itself has that power.
But will they succeed? And by the time they do, will Google have outstripped them again? Again, nobody has a crystal ball; but Google has a long history of building superb, scalable, reliable, (mostly) developer-friendly, and technically groundbreaking web services. Apple…does not.
That said, a bet against them is by no means a guaranteed win. Consider Apple Maps, which has taken great strides since its initial stumbles. And as my friend Lunatic (no, really) pointed out while debating this post with me on Twitter, it’s a bit rich to call Apple overmatched while iOS’s share of the American smartphone market still seems to be increasing faster than Android’s.
But at the very least, on this new cloud-services battlefront, Apple is in the unfamiliar position of underachieving underdog up against the mighty Google war machine. With Google I/O and Apple WWDC both only weeks away, we can expect to find out soon whether either has a new secret weapon. Let’s hope they both do, because the great thing about this war is that when these two giants do battle, everyone else usually wins.
Image credit: Clouds over SoMa, by yours truly, on Flickr.
Posted: April 27th, 2013 | Author: Jon Evans | Filed under: TechCrunch | Tags: OPINION
Credit where it’s definitely due: this post was inspired by a Twitter conversation with Box CEO Aaron Levie.
Don’t look now, but something remarkable is happening.
Instagram had twelve employees when it was purchased for $700 million; all of its actual computing power was outsourced to Amazon Web Services. Mighty ARM has only 2300 employees, but there are more than 35 billion ARM-based chips out there. They do no manufacturing; instead they license their designs to companies like Apple, who in turn contract with companies like TSMC for the actual fabrication. Nest Labs and Ubiquiti are both 200-employee hardware companies worth circa $1 billion…who subcontract their actual manufacturing out to China.
Warren Buffett has long advocated investing in businesses with “moats” around their business model. Often that moat is an economy of scale: the notion that a hundred widgets cost a dollar each but a million widgets only a dime apiece.
Obviously that doesn’t apply to software, or music, or other virtual goods. What’s less obvious is that as time goes by, and technology and interconnectivity advance, it applies less and less to the physical world as well. Industrial capacities that not long ago were available only to gargantuan corporations are today open to anyone and everyone. Amazon, Microsoft, Google, and the OpenStack providers compete to rent economies of scale for web services. Foxconn et al essentially do the same for electronics. So what happens when this trend expands into other sectors? What happens when there are Foxconns for furniture, or cars, or houses, or retail stores? And a Dronenet for transporting physical goods?
What happens is that moats dry up, and are bridged, and previously impregnable incumbents start looking very vulnerable to disruption indeed.
But wait. This is all too small. Let’s think bigger yet.
Compare and contrast Intel with ARM. The former is, historically, a vertically integrated design-and-manufacturing monolith that owns and controls everything it does, whereas the latter concentrates on being the best at the one thing it does. I have enormous respect for Intel, but it seems clear that the world is trending towards ARM’s more decoupled model, wherein their designs (like TSMC’s manufacturing capacity) are made available to any and all customers.
The logical conclusion of that trend, however, is far more transformative than a mere reduction in optimal corporate size and scope: it’s this–
I might paraphrase that as “property isn’t theft; property is an inefficient distribution of resources.” It signifies a dichotomy between two very different modes of thinking–one where you own things, and one where you just use them, and share them when they’re not in use. This is old news in the tech world, which has been dispersing monolithic dedicated channels into hordes of flexibly routed packets for decades…
Fibers always come in pairs. This practice seems obvious to a telephony person, who is in the business of setting up symmetrical two-way circuits, but makes no particular sense to a hacker tourist who tends to think in terms of one-way packet transmission. The split between these two ways of thinking runs very deep and accounts for much tumult in the telecom world.
— Neal Stephenson, Mother Earth Mother Board, 1996
…but it’s enormously foreign and disruptive, verging on revolutionary, to most everyone else. (Indeed, a whole lot of people have probably just mistaken it for communism. It’s not.)
We’re getting pretty abstract here. Let me pick a particular example: this column by Casey B. Mulligan in the New York Times this week, which concludes that “driverless cars … will increase the number of vehicles on the road.”
It’s a fairly smart piece that suffers from what I call “unidimensional extrapolation,” and so misses effects like the trend I refer to above. Widespread use of driverless cars will inevitably lead to a sharp rise in ownerless cars. A major reason for owning a car is that you don’t need to go get one when you need one. Which sounds like a tautology today, but won’t once shared driverless cars can zoom to your house on five minutes’ notice when you need to go to the mall for an hour.
Ultimately, I’m confident that driverless cars will lead to much lower car ownership in urban areas; instead, large numbers of people will have fractional ownership of sizable pools of driverless vehicles, à la Berkshire Hathaway’s NetJets, and just summon them when they need them. This will codify and formalize the running cost of using a car…and since you won’t pay for them when you’re not using them, it in turn will lead to fewer cars on the road.
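To make the fewer-cars intuition concrete, here’s a toy utilization model. The figures are assumptions for illustration, not projections, and it deliberately ignores rush-hour peaks, which a real fleet would have to cover:

```python
# Toy fleet-sizing model. All figures are illustrative assumptions, and the
# model ignores rush-hour peaks, which a real shared fleet must cover.
population = 100_000            # drivers in a city
hours_driven_per_day = 1.0      # average driving per person

# Today: one privately owned car per driver, idle ~23 hours a day.
owned_cars = population
private_utilization = hours_driven_per_day / 24        # ~4% of the day

# Tomorrow: a shared driverless fleet in service 40% of the day.
fleet_utilization = 0.40
total_driver_hours = population * hours_driven_per_day
fleet_cars = total_driver_hours / (24 * fleet_utilization)

print(f"Owned: {owned_cars:,} cars at {private_utilization:.0%} utilization")
print(f"Shared: {fleet_cars:,.0f} cars at {fleet_utilization:.0%} utilization")
# -> roughly 10,400 shared cars doing the work of 100,000 owned ones
```

Even if you triple the fleet to handle peaks, that’s still an order of magnitude fewer cars.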
That’s just one example. More generally, I think it’s hard to deny that both industries (AWS, Foxconn, etc) and individuals (from AirBNB to Zipcar) are increasingly moving towards collective usage of large pools of widely accessible shared resources. Economies of scale as a service, as Aaron put it. So far the effects are limited to specific sectors and domains — but it’s only a matter of time before this wave of change reaches, and profoundly disturbs, entire industries hitherto untouched by its force.
Posted: April 20th, 2013 | Author: Jon Evans | Filed under: TechCrunch | Tags: OPINION
How’s this for synchronicity: Google Glass started shipping the same week that CISPA passed the House, 3DRobotics unveiled their new site, and 4chan and Reddit pored over surveillance photos trying to crowdsource the identity of the Boston bombers.
Cameras on phones. Cameras on drones. Cameras on glasses. Cameras atop stores, in ATMs, on the street, on lapels, up high in the sky. Modern cars log detailed data their manufacturers can access if they so desire. Oh, and “if you carry a phone, your location is being recorded every minute of every day.”
In 1999, Sun CEO Scott McNealy said: “You have zero privacy anyway. Get over it.” Sadly, that sounds more prophetic every week.
I’ve been arguing for years that “Soon enough, pseudonymity and anonymity will only exist online; in the real world…they’ll be more or less extinct.” The hunt for the Boston bombers is to the coming world of surveillance as a 1980s PC is to a modern server farm. Facial recognition, gait recognition, drones the size of dragonflies — all here already. Just imagine twenty years from now. Every step you take outside will automatically be tracked, indexed, and correlated to all of your previous activity ever.
One can reasonably dispute whether the collective crowdsourced 4chan/Reddit attempt to identify the Boston bomber was a good thing or not, and interesting people are engaged in both sides of just that argument –
– but to me, the important thing is the precedent it sets.
A lot of people (just read the comments on my last Google Glass post) are seriously squicked by the possibility of individual video surveillance, but are essentially OK with being watched by governments or corporations. I think that is an extremely wrong and dangerous attitude, because I believe one-way transparency will inevitably breed corruption and abuse.
I am not in favor of the death of personal privacy in public spaces. I just think it’s inevitable. Soon enough cameras and surveillance software will be ubiquitous. There are already terrified voices, e.g. Farhad Manjoo’s, crying for “installing surveillance cameras everywhere” on the eyebrow-raising grounds that “we’re already being watched—just not systematically”.
And that’s why–despite its potentially undesirable social side effects–I’m a cheerleader for Google Glass and its ilk. If transparency will be forced on us, then it needs to be two-way transparency. It’s a given that the strong and rich will be able to watch the weak and poor; we need to ensure that the converse is possible as well. We need to democratize surveillance, and Google Glass is the first of a new kind of tool which can help us do just that.
For instance, I’d like law enforcement, border patrol, the TSA, and other authorities to wear Glass-like cameras at all times, and for that video to be accessible by the public when abuse of authority is alleged. Interestingly, there’s now some real data supporting that stance: “Even with only half of the 54 uniformed patrol officers wearing cameras at any given time, the department over all had an 88 percent decline in the number of complaints filed against officers.”
In the words of the ACLU:
We don’t like the networks of police-run video cameras that are being set up in an increasing number of cities. We don’t think the government should be watching over the population en masse. [but] When it comes to the citizenry watching the government, we like that.
Giving the public some access to police footage isn’t enough, though. We need the people to be able to watch and record their government, just as their government keeps them under constant surveillance. Unfortunately, that inevitably also means that individuals can and will frequently surveil and record each other. Which means bullying, stalking, trolling, and doxing on, well, almost a New York Post scale:
I’m not happy about any of this. But drastically increased surveillance in public places is inevitable. Sorry. It’s just going to be too cheap, too easy, too convenient, and too reassuring to too many. Two-way transparency, however, will be a huge battle. The powers that be have every incentive to foster a moral panic about the stalker evils of personal cameras like Google Glass, and crowdsourced surveillance like that of 4chan and Reddit.
Again, I don’t actually think either is necessarily desirable in and of itself. But I fear that they’re the price we’ll have to pay to have a society relatively free of systematic hierarchical abuse of authority and power — because, more and more, we live in a world where privacy is power.
Image credit: Lingeswaran Marimuthukumar, Flickr.
Posted: April 13th, 2013 | Author: Jon Evans | Filed under: TechCrunch | Tags: OPINION
A few months ago, while visiting a hacker friend’s magnificent new San Francisco loft, he gestured to a little alcove stuffed with server racks and said: “And over there are the Bitcoin mines.” I smiled and nodded, thinking, Oh, right, Bitcoin. Is that still a thing?
Andy, if you’re reading this, I apologize. Is it ever, and how. Over the last few weeks the hype around everyone’s favorite distributed cryptographic currency has gone insane. It’s a Ponzi scheme; no, it’s the first instance of the third era of currency; no, it will spiral up and down forever; no, it’s the new venture-capital frontier; no, it’s an existential threat to the modern state.
No, possibly, conceivably, maybe, and no. But: I realized this week that Bitcoin actually is a really big deal — in a way that’s been almost entirely obscured by all the hype.
A rare voice of reason this month came from Felix Salmon, who wrote (in a post marred by some remarkable ignorance; for instance, Facebook Credits ceased to be a $1 billion market when Facebook discontinued them almost a year ago):
A peer-to-peer payments system, allowing anybody on the internet to pay anybody else on the internet without having to sign up with some financial-services behemoth first, could revolutionize global commerce … Bitcoin isn’t the future. But it has helped to light the way ahead.
I mostly concur. Of course, I would, since I concluded exactly the same thing two years ago, when Bitcoin was at its previous hype peak. I went on then to speculate that its real future might be as a national currency in a nation like Zimbabwe previously scarred by hyperinflation.
…And I don’t know what I was thinking. Bitcoin’s true long-term value was staring me in the face, and I missed it. It wasn’t until I read this superb Nyaruka post on the subject that it hit me.
Almost everyone else writing about Bitcoin is doing so from the perspective of a First World citizen living in a nation with thriving electronic payment networks and a strong, easily traded currency. But that’s not the context where it really matters. Where Bitcoin matters, where it’s important, is the developing world.
Ever tried to exchange Colombian pesos in Guatemala, or Tanzanian shillings in Zambia? I have, and believe me, it’s a Kafkaesque nightmare. Now imagine living in the developing world and trying to sell goods or services internationally. Talk about a pain point. Until Bitcoin. To quote that Nyaruka post:
Someone in Rwanda that builds a compelling service can instantly start taking payments from the rest of the world, without asking for permission, without filling out any paperwork and with the same fee structure as the biggest retailers … So Bitcoin is exciting to me not so much because it is a new currency, but because it has the potential to be a globally recognized, yet completely decentralized, form of digital payment.
Of course unofficial distributed international payment networks are as old as the hills. Our own John Biggs points out that Bitcoin is in essence much like a modern day hawala network; but it is to hawala as PayPal is to money orders sent by Pony Express. No ID required, no setup costs, no nothing: just send and receive. Bitcoin is no threat to the modern nation-state…but it is conceivably an existential threat to PayPal.
However, it’s not without its flaws. For one thing, Bitcoin’s “block chain” — the record that verifies all transactions — could conceivably be forked, as happened due to a versioning bug back in March. That wasn’t a significant problem, but now that Bitcoin’s collective value has briefly hit 10 figures (although it might be back down to eight figures by tomorrow…) you have to wonder if someone might try a brute-force attack on it. “If a user controls the majority of computational power in the mining network, they can manipulate this to their advantage by creating two diverging chains,” to quote a Cornell writeup.
In other words, if a true computing megapower (say, Amazon, Apple, Google, or one of a handful of national governments) really wanted to break Bitcoin, they could. In fact I’ve seen speculation that anyone willing to splash out a few million dollars on custom hardware would probably be able to hijack the block chain.
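The math behind that worry is laid out in section 11 of Satoshi’s original whitepaper; here’s a short sketch of it. Below 50% of the hash power, an attacker’s odds of rewriting history die off exponentially with each confirmation; at 50% or more, they catch up with probability 1:

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Probability that an attacker controlling fraction q of total hash
    power ever catches up from z blocks behind (Nakamoto whitepaper, sec. 11)."""
    p = 1.0 - q
    if q >= p:
        return 1.0          # a majority attacker always catches up eventually
    lam = z * (q / p)
    total = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        total -= poisson * (1.0 - (q / p) ** (z - k))
    return total

# Success odds against the customary six confirmations:
for q in (0.10, 0.30, 0.45, 0.51):
    print(f"hash share {q:.0%}: P(catch up from 6 back) = {attacker_success(q, 6):.4f}")
```

That last line is the whole story: at 51% the answer is simply 1.0, no matter how many confirmations you wait for.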
Furthermore, it’s not really all that anonymous, and anonymity is a highly desirable feature in a digital currency; and worst of all, if the last few weeks have proved anything at all about Bitcoin, it’s that it’s ridiculously volatile… which is exactly what you don’t want in a payments mechanism.
So I believe it’s Bitcoin’s successors — whether that be Ripple/OpenCoin, or the anonymous Bitcoin bolt-on ZeroCoin, or something else still being dreamed up — that will truly change the world. But not the First World. We don’t much need Bitcoin and its descendants, at least not yet. In the developing world, though, crippled by weak currencies and byzantine payment infrastructures, a simple, seamless, frictionless, reliable international peer-to-peer payments system could be a huge, huge deal. But not until the volatility diminishes…which is to say, not until the hype here fades away. Here’s hoping that’s soon.
Posted: April 6th, 2013 | Author: Jon Evans | Filed under: TechCrunch | Tags: Foursquare, Groupon, OPINION
This hasn’t been a great year for Foursquare. “Check-ins are no longer what they used to be,” as Ingrid Lunden observed last month. There seems to be a general consensus that “Foursquare keeps resembling Yelp more and more…” but that comparison isn’t necessarily flattering, especially since there’s little doubt that Yelp has much greater public mindshare.
Then former Square COO and current Khosla Ventures partner Keith Rabois attacked them publicly (click through for the article’s amusing corrections, if nothing else!) prompting some bizarre musing from Michael Lazerow on when it’s OK for someone like Rabois to bash a founder.
(My answer, for what it’s worth: whenever he freaking feels like it. He’s not the Pope. He’s not the President. He’s just a venture capitalist. If you’re worried about public criticism hurting a company, then it’s built on apparent rather than real value and it deserves all the criticism it can get.)
Crowley responded, gamely:
Great tone…but I don’t know about that content. So the mighty check-in was to Foursquare what filters were to Instagram, a gateway drug, soon to be replaced by “the location layer for the Internet”? Uh-huh. You know what that sounds like to me, in the long run? A map. Like Harry Potter’s “Marauder’s Map,” to use Crowley’s own words.
This does not sound like wise strategy. To paraphrase Paul Graham’s on-stage Office Hours at the last-but-one TC Disrupt, “Competing with Google. That’s not so bad. But you’re competing with Google at something they’re actually good at.” And, oh yes, also competing with Apple, whose maps have been steadily improving since their first stumbling introduction. Meanwhile, Google casually rolls out insanely great maps features like ski trails or underwater Street Views several times a year.
Does anyone seriously think, in the long run, that Foursquare has a better chance of “becoming the location layer for the Internet” than both Google and Apple, both of whom clearly take mapping extremely seriously? Anyone? Anyone? Bueller?
Yes, their API is exceptionally useful. I should know: I’m at least three of those 40,000 developers. But a great API does not a successful company make, and there are plenty of competitors: Yelp, Google Places, Factual, etc., although Foursquare is admittedly the most developer-friendly.
So I hate to say it, but Keith Rabois was one thousand percent correct. With their tactics struggling in the face of better-established competitors like Yelp, and their strategy apparently consisting of plotting a course between Google and Apple’s Scylla and Charybdis in a leaky drifting raft…what’s poor flailing Foursquare to do?
Funny you should ask. I happen to have an answer. And it is this: merge with Groupon.
Wait, no, hear me out. I know what you’re thinking: bad idea, or worst idea ever? and/or listen, buddy, two dumbs don’t make a smart. Groupon of course just fired its CEO in the face of sagging, well, everything.
But think about it. What’s one thing Groupon has that almost nobody else does? Existing relationships with an enormous number of small businesses. Frequently awfully contentious relationships, granted, but relationships nonetheless. What’s one thing that really defines Foursquare? Not the “location layer,” but the check-in. What’s a business model that just might work? Have users check in with intent — “shopping for my mother” or “hungry for lunch” — and promptly get deluged with coupons from a panoply of nearby retailers; the combined company then collects a percentage of the resulting sales.
Which is of course something Foursquare kind of tries to do already. (Starting about eighteen months after I suggested it.) But for it to really work they need a huge critical mass of small businesses to participate. What’s one thing Groupon kind of managed to do with its bizarre, multi-billion-dollar rise and fall? Connect to just such a critical mass.
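To make the mechanic concrete, here’s a toy sketch of the intent-to-coupon flow. The merchants, discounts, and revenue share are all invented:

```python
# Toy intent-to-coupon matching; merchants, offers, and the 5% cut are invented.
from dataclasses import dataclass

@dataclass
class Offer:
    merchant: str
    category: str        # the kind of declared intent this offer targets
    discount_pct: int
    distance_km: float

OFFERS = [
    Offer("Luigi's Trattoria", "lunch", 20, 0.3),
    Offer("Saigon Sandwiches", "lunch", 15, 0.1),
    Offer("Bloom Florists",    "gifts", 10, 0.5),
]

REVENUE_SHARE = 0.05     # hypothetical cut collected on coupon-driven sales

def check_in_with_intent(intent: str, max_km: float = 1.0) -> list[Offer]:
    """Return nearby offers matching a declared intent, best discount first."""
    nearby = [o for o in OFFERS if o.category == intent and o.distance_km <= max_km]
    return sorted(nearby, key=lambda o: o.discount_pct, reverse=True)

for offer in check_in_with_intent("lunch"):
    print(f"{offer.merchant}: {offer.discount_pct}% off, {offer.distance_km} km away")
```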
Call me crazy. Call me a fool. But I think if these two long-term losers got together, they just might turn into a winner. It’s a longshot, sure — but it doesn’t seem much longer to me than believing that either has much of a glorious future on their own.
Posted: March 30th, 2013 | Author: Jon Evans | Filed under: TechCrunch | Tags: OPINION
So there’s this startup called SmogFarm, which does big-data sentiment analysis, “pulse of the planet” stuff. I spotted them last year, and now they’ve got an actual product with an actual business model up and running in private beta: KredStreet, “The Social Stock Trader Rankings,” which performs sentiment analysis on StockTwits data and a sampling of the Twitter firehose to determine traders’ overall bullish/bearish feeling. They also compare reality against past sentiment to score and rank traders based on their accuracy, which is more interesting.
It’s a first iteration, but it looks pretty nifty, and I like the idea of a ranking system wherein unknowns can leave high-profile loudmouths in their dust by virtue of simply being right more often. Even if I feel slightly uneasy when I imagine such a system being applied to, say, tech bloggers. Actually being held accountable for what I’ve written in the past? Doesn’t that just seem terribly wrong?
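KredStreet hasn’t published its scoring formula, but the core idea (grade each bullish or bearish call against what the price actually did) fits in a few lines. A hypothetical sketch, with invented traders and data:

```python
# Hypothetical scoring in the KredStreet spirit; the traders, calls, and
# prices below are invented, and the real ranking formula isn't public.
calls = [
    # (trader, ticker, sentiment, price_at_call, price_later)
    ("quiet_unknown", "AAPL", "bullish", 420.00, 455.00),
    ("quiet_unknown", "GOOG", "bearish", 790.00, 810.00),
    ("loud_pundit",   "AAPL", "bearish", 420.00, 455.00),
    ("loud_pundit",   "FB",   "bearish", 26.00,  27.50),
]

scores: dict[str, list[int]] = {}
for trader, ticker, sentiment, at_call, later in calls:
    correct = (sentiment == "bullish") == (later > at_call)
    scores.setdefault(trader, []).append(1 if correct else 0)

# Rank purely by hit rate: being right beats being loud.
ranked = sorted(scores.items(), key=lambda kv: sum(kv[1]) / len(kv[1]), reverse=True)
for trader, hits in ranked:
    print(f"{trader}: {sum(hits)}/{len(hits)} calls correct")
```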
And of course it’s early days yet for companies like SmogFarm/KredStreet, and sentiment analysis, and natural language processing (such as that which powered Summly), and Palantir-style data mining. Just imagine what they’ll be able to do in five years. And when they turn all that big-iron, big-data searchlight power on, say, Facebook timelines — what won’t they be able to determine?
A few years ago the EFF discovered that something as simple as your browser settings makes you a lot less anonymous online than you might believe. Last week a study found that “human mobility traces are highly unique”: in a dataset of allegedly anonymous cell-phone location data, “four spatio-temporal points are enough to uniquely identify 95% of the individuals.” Good software can mine a lot of meaning out of apparently sparse and empty data.
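To see why so few points suffice, here’s a toy version of the uniqueness test on invented traces. (The real study, “Unique in the Crowd,” used fifteen months of data on 1.5 million people.)

```python
# Toy version of the mobility-trace uniqueness test; the traces are invented.
from itertools import combinations

# Each user's trace: a set of (cell_tower_id, hour_of_day) observations.
traces = {
    "user_a": {(17, 8), (42, 9), (42, 18), (3, 20), (17, 22)},
    "user_b": {(17, 8), (99, 9), (42, 18), (5, 20), (17, 22)},
    "user_c": {(60, 8), (42, 9), (61, 18), (3, 20), (60, 23)},
}

def identifiable(user: str, k: int = 4) -> bool:
    """True if some k points of this user's trace match no other user's trace."""
    others = [t for u, t in traces.items() if u != user]
    return any(
        all(not set(combo) <= other for other in others)
        for combo in combinations(traces[user], k)
    )

for user in traces:
    print(user, "pinned down by 4 points:", identifiable(user))
# Even users who share several points with others are uniquely
# identified, because no one else matches all four at once.
```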
So just imagine what happens when next-generation language- and image-processing software, and then the generation after that, and the generation after that, is unleashed on your Facebook timeline. It seems very plausible that all those innocuous things you say, and how you say them, and the pictures you post, and the games you play, will subtly and invisibly add up to a terrifyingly accurate portrait of you, including any and/or all of the things about yourself that you never actually wanted to make public.
What’s worse is that it will be ridiculously easy. Would-be employers won’t have to scroll through your Facebook timeline themselves; they’ll just need to point their profiling software in your direction and 30 seconds later read its high-confidence predictions of your work habits, neuroses, personal failures, emotional instabilities, attitude towards authority, and sexual proclivities, all expertly extrapolated from the tapestry of subtle-to-invisible nuances accumulated from all of your photos, comments, Likes, upvotes, etc.; all individually meaningless, but collectively highly illuminating. Individual profiling is a huge business just waiting to be tapped by ethically challenged startups.
(This could be mitigated somewhat if you were to keep all your activity friends-only, of course; but even then, every app or distant acquaintance you’re connected to will be able to learn more about you than you ever intended. And it’s easy to envision employers requesting that you connect to them on Facebook as part of the job-application process, and filtering out those who refuse…)
I can imagine what that kind of profiling software would have said about me, early in my career: Hopeless bibliophile. Afflicted with incurable wanderlust. Doesn’t like being told what to do. Extremely chancy hire: likely to quit any job after six months to travel or try to write the Great Canadian Novel. Which, er, would have been one thousand per cent true; but obviously I didn’t want my potential employers back then to know about it.
Doesn’t matter to me now, of course, now that I’ve mellowed out some and I’m pretty well-established. But when people who are still struggling discover that everything they do online says far more about themselves than they know, and will be ceaselessly stored, sifted, mined and measured…they’ll inevitably become a whole lot less forthright than they are today.
Most people already know not to publicize individual things that reflect badly on them; once they realize that the totality of what they post can have serious repercussions, too, they’ll clam up. In the end all public online activity will essentially become an endless ongoing job interview. Doesn’t that sound great?
You would think all this big-data artillery would be good news for Facebook, so that they can target their ads more effectively. But once everything you share is being watched, filtered, and graded by remorseless, relentless profiling software, you’ll inevitably begin to share far less. Sure, you can try to use pseudonyms…but screw up just once and they’ll be tied to your real identity forever.
“Zuckerberg’s Law” states that every year the amount of information shared by Internet users doubles. But KredStreet and the like are only the very beginning of what can be done with this kind of data analysis. It’s hard to imagine Zuckerberg’s Law marching on once people realize that everything they do online accumulates into data that reveals far more about them than they know, which can and will be used against them. Instead I can see Facebook slowly turning into a ghost town where everyone is always on their very best fake behavior.
Posted: March 23rd, 2013 | Author: Jon Evans | Filed under: TechCrunch | Tags: Ebooks, OPINION
If you love books–heck, if you even like ‘em–run, don’t walk, and read this magnificent, magisterial essay by Richard Nash on their past, present and future. It’s long. Don’t be frightened. But even if the Internet has shredded your attention span, at least scroll down to its epic final paragraph. Go on. I’ll wait.
It’s been a rotten decade for book publishers, newspapers, and anyone else clinging to that 15th century technology called the printing press. Marc Andreessen has advised the mighty New York Times to “burn the boats” and shut down their presses. His partner Ben Horowitz claimed last year that “babies born today will probably never read anything in print.”
Meanwhile, Borders is dead, the tablet is killing the e-reader, and Barnes & Noble’s Nook has gone from investor darling to dead-weight albatross. The “Big Six” publishers may seem to be surviving nicely, but check out this graph:
If publishers are at war with Amazon, the undisputed king of e-commerce–and they certainly think they are–then that remarkable trend does not bode well for them.
Authors aren’t doing so well either. “For large parts of the year it only takes a few hundred copies a day, not a few thousand, to get to the top of Amazon’s daily charts.” “My novel shot to the top of the site’s bestseller list last summer. You won’t believe how little I got paid.”
Nowadays, when authors dream of financial success, they dream more of Hollywood or TV adaptations than slots on the New York Times bestseller list. Movies and television have held up remarkably well under the onslaught of the Internet, thanks largely to ever-more-lucrative foreign markets, while book publishing has quietly become far more hit-and-miss than Hollywood. Last year each book in E.L. James’ Fifty Shades Of Grey trilogy sold more than 15 million e-book copies. Only one other book, Gillian Flynn’s Gone Girl, broke a million. Which is more than it sold in print, incidentally.
For the last five years, in the face of this spreading transformation, the publishing industry has been caught in a tawdry and depressing spiral of denial and decay, constantly attempting to reject new media, new technologies, and new business models until they can fight back no more. (Disagree? Name some publisher-driven innovations.) Evan Hughes’ recent Wired piece is the latest in a long line of eulogies. If it seems incredibly musty and tired to you, you’re not alone. I’m faintly amazed that it was published in Wired in 2013; my own contribution to the genre dates back to 2007.
That’s why Nash’s essay is such a breath of revolutionary air. The publishing industry will never be the same, but why can’t it be better? Why can’t a whole new model of publishing be created, rather than this false dichotomy between “published” and “self-published”? So the king is dying; well, long live the king!
The Internet opens up new ways of connecting with readers that authors have never dreamed of before… and that publishers seem to barely even consider. Take Wattpad. A few months ago they put a couple of my novels up on their site for free. (They did ask permission, even though they didn’t need to; after the rights to the books in question had reverted from their initial publishers, I released them under a Creative Commons license.)
And I’m delighted that they did. A cool million chapter-views of my back-catalog hacker thriller Invisible Armies later, I know far more about how people read the book than I ever did before:
Hughes seems to be arguing that authors will choose to self-publish. Charles Stross disagrees:
Yes, I could do it. But it’d suck up a huge amount of time I would prefer to spend doing what I enjoy (writing) and force me to do stuff I do not enjoy (reading contracts, accounting, managing other people). The only sane way to do it would be to hire someone else to do all the boring crap on my behalf. And do you know what we call people who do that? We call them publishers.
Indeed. But wait: why do all of those people have to work under the same corporate aegis? Why can’t Stross hire a separate editor, copy editor, publicist, and marketer? Why must their end product be viewed as a thing that is complete and engraved in stone, rather than a living beast amenable to A/B testing and weeks-to-months of optimization, like a Broadway play in previews? If a book isn’t a sheaf of papers any more–and given that the bestselling e-books are now outselling the sheaves, it clearly isn’t–then what is it?
I’ll take a swing at that one: a book is a story told in the size and shape that fits most deeply and tightly into the human brain. Everyone keeps waiting for Amazon Singles and short stories to take off, and waiting, and waiting. But I believe novels will remain the dominant form of written storytelling so long as our brains remain substantially unchanged.
Maybe the existing system of publishers and booksellers will collapse. Maybe our collective ability to filter the good from the bad will be challenged. Maybe, as more and more books are written, and more and more made available for free, full-time authors will become an endangered species.
Doesn’t really matter. Books will remain, and because they’re books, because they’re that razor-barbed size and shape, they’ll remain a genuinely powerful and subversive medium. Richard Nash is right: whatever tidal wave of change comes next, whatever economic system or sociopolitical order, you can bet that books, in one form or another, will be at its disruptive heart.
Image credit: Booksplosion, by azrasta, on Flickr.
Posted: March 16th, 2013 | Author: Jon Evans | Filed under: TechCrunch | Tags: OPINION
“First you see video. Then you wear video. Then you eat video. Then you be video.” — Pat Cadigan, Pretty Boy Crossover
Sheesh. A whole lot of people who presumably have never actually seen Google Glass in action appear to be really upset. “People who wear Google Glass in public are assholes,” says Gawker’s Adrian Chen. “You won’t know if you’re being recorded or not; and even if you do, you’ll have no way to stop it,” doom-cries Mark Hurst.
Seriously, people? Seriously? DARPA has built drone-mounted 1.8-gigapixel cameras that can recognize people waving from 15,000 feet. Gait recognition software is good enough that they probably don’t even need to see your face. Oh, yes, and they’re working on legions of drones the size of insects, too, while they’re at it. There’s already one closed-circuit camera for every 32 people in the United Kingdom. And the NSA is building a new 65-megawatt data center in Utah to parse this brave new world of big data.
Meanwhile, everywhere you go, hardware is getting faster, software is getting better, everything is being networked. We’re marching boldly into a panopticon future. I’ve been writing about this for years. And now, suddenly, you’re irate about the potential privacy repercussions of a few geeks bearing glasses? What is wrong with you people? Where have you been?
I think cameras on the glasses of random passersby are among the least of your privacy concerns. At least there’s a red LED that winks on when Google Glass is recording, so you’ll know that you’re suddenly starring in your interlocutor’s home video. As panopticons go, the Google Glass version is pretty mild-mannered and half-hearted. The recent spate of furious privacy objections is enormously overwrought compared to how much we should be concerned about our governments.
But there’s something about being caught on video, not by some impersonal machine but by another human being, that sticks in people’s craws and makes them go irrationally berserk. If these were glasses that recorded audio and took still photos when the wearer double-blinked, would anyone be anywhere near as upset? Hell, no. But video is somehow primal; video hits us where we live. (That’s why it’s so insanely popular. Did you know that YouTube is arguably the world’s second most popular social network?)
To a limited extent I actually want Google Glass surveillance, in an uneasy Pandora’s-box kind of way. I want police officers, border guards, and other authorities to be required to wear them every moment that they’re on duty, and I want that data to be available to those who report police brutality or other abuses of authority. (I’ve been saying that for five years, ever since I was mugged at gunpoint in Mexico City. Pretty sure it would have made a big difference to, for instance, my friend Peter Watts.) I want street protestors to be videoing the authorities at all times. I do not trust the powers that be.
If pervasive, ubiquitous networked cameras ultimately make public privacy impossible, which seems likely, then at least we can balance the scales by ensuring that we have two-way transparency between the powerful and the powerless, rather than just a world where the former spy on the latter; and we can give people the tools required for online and/or personal privacy, such as pseudonyms and easy-to-use strong cryptography.
That’s not to say I’m feeling all Panglossian about Google Glass. (Panglassian? Sorry.) My concern is far more petty: it’s that other people’s videos are almost uniformly terrible.
I know a little about moving pictures. I’ve done camerawork for TV shows, just helped build a site that shows curated movies, and I take the odd pretty good photo, if I do say so myself. But video is hard. Much harder to do well than pictures, which anyone can get right now and again via trial and error. Take a look at Vine, or Takes: one reason they’re only a few seconds long is that, if they were any longer, almost all examples of the form would quickly be revealed as nearly unwatchable crap.
Don’t get me wrong, putting new tools in everyone’s hands, and making them easier, inevitably leads to some awesome outsider art, and that’s always been doubly true for video. Take my friend Count Jackula’s series of horror-movie reviews, for instance, which increasingly have become hilarious short films in their own right.
So let’s hope the next generation, born in video, will use it more fluently, and find ways to make use of the petabytes of data that Google Glass or its ilk will generate. And that’s “will” not “may.” Yes, it’s entirely possible that Google Glass is like Apple’s Newton, 10 years ahead of its time, but –
– something like it is coming, sooner or later, almost inevitably. We may ultimately need augmented reality glasses in order to filter out all the bad videos of other people’s mediocre augmented realities. Maybe that’s what Pat Cadigan meant by “then you eat video.” On my bad days I feel like we’re all about to drown in a sea of awful home movies, while being tracked by drone- and signpost-mounted surveillance cameras 24/7/365; like we’re all sleepwalking onwards into a really tacky dystopia. Brace yourselves.
Image credit: I for one welcome our insect-drone masters, by yours truly, on Flickr.