Posted: May 23rd, 2013 | Author: Scott Merrill | Filed under: TechCrunch
The Fedora Project has been supporting Raspberry Pi, the diminutive $35 computer, for some time. Today they’re making the Pidora “remix” of the core Fedora distribution available. Like the Raspbian distribution of Debian, Pidora is compiled specifically to take advantage of the hardware already built into the Raspberry Pi.
Pidora offers a couple of interesting little additions to your standard Fedora desktop experience. The reduced oomph of the RPi means that the full-blown GNOME desktop is replaced with the lighter-weight Xfce. Pidora also offers an easy-to-use headless mode for folks running without a monitor. If you attach speakers to your RPi, it’ll helpfully announce its IP address out loud. Clever trick.
The Pidora build was performed at Seneca’s Centre for Development of Open Technology, where they’ve been working with Fedora ARMv5tel/armv7hl build farms for the last couple of years. That experience was directly responsible for Pidora, since the RPi uses the ARMv6 architecture with a dedicated FPU, which is not strictly part of the ARMv6 spec.
According to CDOT’s Chris Tyler, there were three main challenges to getting Pidora out the door:
- Ordering the build — sequencing the initial build of over 10K source packages that have complex and sometimes circular dependency chains can be challenging.
- ARMv6-specific issues — armv5 and armv7 are the most common targets for ARM builds. Some packages make incorrect assumptions or are missing code for armv6.
- Native building — Fedora has a native-build philosophy, which requires that package builds be performed on a system capable of executing the compiled code.
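The first of those challenges is, at heart, a topological sort over the package dependency graph, with circular chains flagged for manual intervention such as a bootstrap build. A toy sketch of the idea in Python (the package names and dependencies here are invented for illustration, not Fedora's real graph):

```python
# Topologically order packages by build dependency, flagging
# circular chains that need manual intervention (e.g. a bootstrap
# build). Package names below are invented for illustration.
from graphlib import TopologicalSorter, CycleError

deps = {
    "gcc": {"glibc"},
    "glibc": {"gcc"},       # circular: each needs the other to build
    "python": {"glibc"},
}

try:
    print(list(TopologicalSorter(deps).static_order()))
except CycleError as err:
    print("circular dependency chain:", err.args[1])
```

With over 10,000 source packages, even finding where the cycles are is real work before the first build can start.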
Tyler shared some additional details of why Pidora is an interesting option for Raspberry Pi owners:
Pidora contains a number of Raspberry Pi-specific Python modules and native libraries, such as WiringPi, bcm2835, and python-rpi.gpio. The kernel is also compiled to expose the Raspberry Pi interfaces such as I2C, SPI, serial, and GPIO, and several of these can be accessed with /sys file interfaces (even from bash) without using any special libraries or modules. In addition, Pidora contains Raspberry Pi-specific utilities and libraries for access to the Broadcom Videocore IV GPU.
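Those /sys file interfaces can be driven from practically any language, not just bash. A minimal sketch in Python, assuming the standard Linux GPIO sysfs layout (pin 17 is an arbitrary choice, and the export/direction/value dance only works on an actual Pi):

```python
# Sketch of driving a Raspberry Pi GPIO pin through the sysfs
# interface Pidora exposes, with no special library needed.
# Paths follow the standard Linux /sys/class/gpio layout.
import os

GPIO_ROOT = "/sys/class/gpio"

def pin_paths(pin: int) -> dict:
    """Build the sysfs paths used to export and drive a pin."""
    base = f"{GPIO_ROOT}/gpio{pin}"
    return {
        "export": f"{GPIO_ROOT}/export",
        "direction": f"{base}/direction",
        "value": f"{base}/value",
    }

def set_high(pin: int) -> None:
    paths = pin_paths(pin)
    with open(paths["export"], "w") as f:    # make the pin visible
        f.write(str(pin))
    with open(paths["direction"], "w") as f: # configure as output
        f.write("out")
    with open(paths["value"], "w") as f:     # drive it high
        f.write("1")

if os.path.isdir(GPIO_ROOT):                 # hardware present?
    set_high(17)
else:
    print(pin_paths(17)["value"])
```

The same value files can be read to poll an input pin, which is why even a few lines of bash are enough for simple hardware projects.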
I’ve only just recently acquired an RPi, and last night I installed Pidora onto an SD card. I was off to the races with no trouble at all.
The Fedora folks have a long history of giving out USB sticks with Fedora pre-loaded. I suspect we’ll soon start seeing SD cards pre-loaded with Pidora being handed out at conferences and events.
Posted: May 6th, 2013 | Author: Scott Merrill | Filed under: TechCrunch | Tags: Cloud, Privacy
An old saying states that “security is inversely proportional to convenience.” This explains the slow adoption of many important security technologies. HTTPS, the secure version of the HTTP protocol used to browse the world wide web, has been around for more than two decades, but it’s only been in the last couple of years that it has been enabled by default on many major websites.
Back when we sucked down email from our ISPs over POP3 connections, our data was, literally, ours: it was under our control more often than it wasn’t. If someone wanted access to your data, they had to access (or attack) your computer. As more and more of today’s data lives “in the cloud”, security becomes more and more important. If someone wants to access your data, you might never know about it, as the attacks (or subpoenas) would be executed against the various cloud services you use.
Unlike Dropbox and similar services, which make it clear that they can access your data if they need to do so, SpiderOak employees can’t even see the names of the files you upload. And yet, SpiderOak hasn’t enjoyed quite the same level of success as Dropbox, in part because the security implementation makes it a little harder to use.
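The core of that zero-knowledge approach is that keys are derived on the client from a passphrase the server never sees, so the server stores only opaque tokens. Here's a toy sketch of the principle in Python using just the standard library; this is an illustration, not SpiderOak's actual scheme:

```python
# Toy illustration of "zero-knowledge" filename handling: the
# client derives a key from a passphrase the server never sees,
# and the server stores an opaque token instead of the filename.
# NOT SpiderOak's real implementation.
import hashlib, hmac

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Key derivation happens client-side; neither the passphrase
    # nor the derived key ever leaves the user's machine.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def filename_token(key: bytes, filename: str) -> str:
    # Deterministic, so the client can look a file up again later,
    # but unreadable to anyone who lacks the key.
    return hmac.new(key, filename.encode(), hashlib.sha256).hexdigest()

key = derive_key("correct horse battery staple", b"per-user-salt")
token = filename_token(key, "tax-return-2013.pdf")
print(token)  # the only thing the server ever stores
```

The trade-off is exactly the convenience problem mentioned above: lose the passphrase and nobody, including the provider, can recover your data.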
SpiderOak has made some great strides in making a friendlier product for casual users. They’ve revamped the sign-up process to make it easier and less intimidating, without compromising security. And they’ve just unleashed their new Hive addition, which makes multi-device synchronization easier than ever.
Historically, SpiderOak required users to explicitly share specific folders with specific devices. That’s a great feature, allowing you to ensure that your personal stuff doesn’t ever get synchronized to a work laptop, for example. But not everyone wants to explicitly decide which data can reside on which devices. Hive, available now, provides a pre-configured folder that is automatically synchronized with all devices linked to your account. This brings more Dropbox-like functionality to SpiderOak users, allowing them to enjoy secure cloud-based storage without manually configuring every device.
As Dropbox’s success has made abundantly clear, though, file storage and synchronization is so last year. The new hotness is service integration and automation. Things like IFTTT and all the other automation built atop it are making Dropbox the filesystem of the Internet. SpiderOak wants to be the private filesystem of the Internet. In order to support a rich ecosystem of third-party applications while still enforcing a commitment to zero-knowledge privacy, SpiderOak is working on Crypton, “a framework for building cryptographically secure cloud applications.”
SpiderOak has a couple of other tricks up their sleeve, too. While Dropbox and its ilk are strictly hosted solutions, SpiderOak has worked with a number of different corporate clients to deploy zero-knowledge privacy behind those companies’ firewalls. For various government and military agencies, this kind of on-premise secure storage is a requirement that Dropbox can’t easily provide.
Finally, SpiderOak has a few PSAs about the distinction between security and privacy available at zeroknowledgeprivacy.org. “Why Privacy Matters” and “The Fine Print of Privacy” are easy to read primers on some of the issues surrounding privacy online today. Even if you’re happy with Dropbox — or any of the cloud services that are quickly becoming indispensable — it’s worth spending a few minutes to read these primers.
Posted: April 15th, 2013 | Author: Scott Merrill | Filed under: TechCrunch
The Xen project celebrates its 10th anniversary this week. It’s also moving to a new home at The Linux Foundation as a Collaborative Project. Just like the Linux kernel, Xen enjoys contributions from a variety of different companies, so a vendor-neutral organization to host development and collaboration is a big win for the project.
Although KVM has garnered a lot of attention lately, Xen is still more widely deployed and used. After all, it serves as the underpinnings for all of Amazon Web Services’ EC2 virtualization. It’s also used by Cisco, Citrix, Google, and a host of other companies. Recent developments in Xen have come from organizations as diverse as the U.S. National Security Agency, SUSE Linux, Oracle, and Intel.
“The open source model is predicated upon freedom of choice, so supporting a range of open source virtualization platforms and facilitating collaboration across open source communities is a priority for The Linux Foundation,” wrote Jim Zemlin, executive director of the Linux Foundation, in a blog post. “The market has proven there is opportunity for more than one way to enable virtualization in Linux, and both KVM and Xen have their own merits for different use cases.”
Posted: April 15th, 2013 | Author: Scott Merrill | Filed under: TechCrunch | Tags: Enterprise
The explosion of infrastructure-as-a-service and platform-as-a-service offerings has greatly expanded the ways in which hobbyists and professionals deploy web sites and web services. For about the same cost as cheapo shared hosting, you can get your own small virtual machine at any number of providers, allowing you to tweak the entire instance to just the way you want it. Such a VM is perfect for running TT-RSS, or a photoblog, or just learning the differences between the Apache and nginx web servers. If infrastructure isn’t your thing, you can quickly deploy your code to various PaaS providers, often for free, and have a site as reliable as what you’d get from shared hosting.
The great thing about PaaS hosts is that they’re mostly agnostic to the stuff running inside them. Some PaaS hosts may cater to specific languages, but generally any app or framework written in a supported language will work on your PaaS of choice. The bad thing about PaaS is that a lot of the underpinnings of “hosting” get abstracted away completely, and the various open source applications and frameworks haven’t quite caught up with this design paradigm yet.
Take, for example, the simple task of uploading a file to your typical open source CMS. The PaaS host likely doesn’t permit writing directly to your application’s execution space, so you’ll need to jump through some configuration hoops to point your CMS to wherever your provider wants you to write files. Depending on the app, this may or may not be easy to do. Because uploaded files live outside of your app’s version control, you need to take extra steps to properly back up uploaded files. In short, things can get complicated quickly.
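The usual workaround is to make the writable location configurable instead of hard-coded, typically via an environment variable the platform sets for you. A minimal sketch (the variable name UPLOAD_DIR is hypothetical; each PaaS documents its own convention):

```python
# Sketch of the configuration hoop a PaaS forces on a CMS: uploads
# can't live next to the code, so the writable location comes from
# the environment. "UPLOAD_DIR" is a hypothetical variable name.
import os, tempfile

UPLOAD_DIR = os.environ.get("UPLOAD_DIR", tempfile.gettempdir())

def save_upload(name: str, data: bytes) -> str:
    """Write an uploaded file to the platform-provided directory."""
    dest = os.path.join(UPLOAD_DIR, name)
    with open(dest, "wb") as f:
        f.write(data)
    return dest

path = save_upload("avatar.png", b"\x89PNG...")
print(path)
```

Note that the backup problem remains: whatever lands in that directory still lives outside version control and needs its own backup strategy.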
I think the next step in PaaS evolution is going to be app-specific hosting providers. Such providers will combine infrastructure- and application-specific tweaks to provide a cohesive, compelling offering. One example of an app-specific PaaS provider I recently spoke to is Pantheon, a company that’s built a comprehensive Drupal hosting solution.
Pantheon isn’t cheap, but what they offer is considerably distinct from an application-agnostic PaaS like Red Hat’s OpenShift. First, Pantheon has designed their infrastructure to provide “a service that could offer the best possible performance and features for Drupal.” This is enabled by a custom front-end routing solution, a custom DNS system, a custom nginx front-end, and a lot more. The engineering work is impressive, to say the least.
That solves a lot — but not all — of the performance issues; and performance isn’t the only problem to solve. As mentioned above, file uploads can be a real pain point. Pantheon tackled this with their own distributed filesystem, called Valhalla, which is mounted on individual instances using WebDAV. This is a very complicated solution, and not one that makes sense for a single Drupal site to orchestrate. But all of this custom complexity allows Pantheon to offer a remarkably distinct service in the world of Drupal hosting.
Pantheon’s latest product is an Enterprise offering, complete with dedicated concierge service to assist with all aspects of a site deployment. That kind of hand-holding helps ensure that a site rollout is a success. In addition, Pantheon spins up dedicated development and testing environments that exactly mirror the production site’s configuration. This is often a cause of problems in complex site development: the prod environment doesn’t exactly match the development environment, introducing all sorts of hard-to-diagnose problems. Other features of the Enterprise product include database replication and scaling and load testing.
Josh Koenig, co-founder of Pantheon, told me, “We’ve spent much of last year in a kind of ‘beta’ for this product with enterprise customers — figuring it out in a series of one-off engagements that have been progressively getting better and better. Now we have it. It’s a product. It’s done, and we’re shipping it to the world.”
Pantheon’s Enterprise product packs a lot of punch. The New Republic recently deployed their new site on Pantheon, and enjoyed over 100 million pageviews in the first 24 hours. Pantheon’s architecture allowed the site to handle the load without any special considerations. As part of the Concierge service included with the Enterprise product, Pantheon performed load testing to establish the resources necessary to support 500 concurrent users. The weekend the site went live, Pantheon noticed that demand was exceeding the baseline expectation by about 40%. Within three minutes of identifying the spike in load, additional resources were allocated.
Not every site will need the architectural underpinnings that Pantheon offers, of course, so traditional shared hosting will continue to scratch some people’s itches. Pantheon is certainly on to something, though. I strongly suspect that more application-specific hosting solutions will come forward in 2013.
Posted: April 15th, 2013 | Author: Scott Merrill | Filed under: TechCrunch
At the OpenStack Summit today, Red Hat announced RDO, “a freely available, community-supported distribution of OpenStack that runs on Red Hat Enterprise Linux, Fedora and their derivatives.” In essence, RDO will function for Red Hat OpenStack much like Fedora does for Red Hat Enterprise Linux: new features will land upstream, get integrated into RDO, and eventually make their way into the commercially supported offering.
From the press release:
RDO brings the core OpenStack components – Nova, Glance, Keystone, Cinder, Quantum, Swift and Horizon – as well as incubating projects Heat, for cloud application orchestration, and Ceilometer, for resource monitoring and metering. Installation is made easy with the Red Hat-developed installation tool, PackStack.
That last bit is interesting. OpenStack is a complex suite of tools, and the installation process is non-trivial. Any work to streamline that will reduce at least one barrier to success.
As for the name, RDO? It stands for “Red Hat Distribution of OpenStack.” Not quite as catchy as “Fedora,” but what can you do?
In order to make the adoption of Red Hat OpenStack as easy as possible, Red Hat also announced today the launch of an official Cloud Infrastructure Partner Program, “a multi-tiered program designed for third-party commercial companies that offer hardware, software and services for customers to implement cloud infrastructure solutions powered by Red Hat OpenStack.” Cisco, Intel, and Mirantis are all on board as early members of the Partner Program, along with 25 other companies.
Red Hat is the largest contributor of code to the latest release of OpenStack, and today’s announcements make it clear that OpenStack is a key part of Red Hat’s ongoing strategy.
Posted: April 2nd, 2013 | Author: Scott Merrill | Filed under: TechCrunch
I’ve written a number of times about how ubiquitous Linux has become. It powers supercomputers and cell phones. It’s in automotive infotainment systems. It’s in medical equipment. It’s also now in firearms, thanks to the folks at Tracking Point.
Let me state, up front, that I am not a gun enthusiast. Although I’ve fired a few weapons through the years I’m not a hunter, and have never shot a living thing. Guns of any sort are an area of technology about which I’m largely ignorant. Any inaccuracies about Tracking Point’s products are entirely my fault. The reason I’m writing about this is because it’s an interesting way to use Linux and Free Software well outside the realm of enterprise computing, social networking, and the like.
Tracking Point was founded in 2009 by John McHale with the aim of creating a “precision guided firearm”, one that uses state-of-the-art technology to enhance the long-range shooting experience. Accuracy is the obvious benefit from such improvements, but this brings with it a number of ancillary benefits to hunters. Improved accuracy leads to more “ethical kills,” whereby animal suffering is minimized.
According to the folks at Tracking Point, most hunters are comfortable making shots at ranges up to 200 or 300 yards. Tracking Point’s solution easily allows people to double — and sometimes triple — that range, with no additional training or effort.
The variables involved in making an accurate long-range shot are many and complicated. Wind speed, elevation, temperature, humidity, the curvature and rotation of the Earth, and more all factor into where you need to aim. Tracking Point’s solution performs all of the necessary calculations for you and presents you with a firing solution automatically.
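To get a feel for just one of those variables, here's a back-of-the-envelope calculation of gravity drop alone, ignoring drag, wind, spin drift, and everything else the real system models (the muzzle velocity is an illustrative number, not a Tracking Point spec):

```python
# Back-of-the-envelope look at one input to a firing solution:
# how far a bullet falls over a flat-fired range, vacuum model.
# Real ballistics adds drag, wind, altitude, and much more.

def bullet_drop(range_m: float, muzzle_velocity_mps: float) -> float:
    """Vertical drop in metres over the given range, gravity only."""
    time_of_flight = range_m / muzzle_velocity_mps
    return 0.5 * 9.81 * time_of_flight ** 2

# A hypothetical 900 m/s round fired at targets 200 m and 800 m away:
for rng in (200, 800):
    print(f"{rng} m: drop ≈ {bullet_drop(rng, 900):.2f} m")
```

Even in this simplified model, quadrupling the range multiplies the drop sixteen-fold, which is a good hint at why doubling or tripling a hunter's comfortable range without extra training is a hard problem.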
To make this work, Tracking Point sells a complete solution of rifle plus scope plus ammo. In order to properly calculate the best firing solution, the system needs to know what kind of rifle and round are being used.
The heart of the product lies in a Linux-powered rifle scope. This is not your typical glass scope. Instead, it’s a video recording system that runs the stream through an image processing engine and presents you with a heads-up display. On the rifle is a special button to “paint” a red dot onto your target. The image processing engine sees the dot and keeps it on your target, regardless of motion (yours or the target’s). Squeeze the trigger to arm the rifle, and the HUD gives you an aiming reticle with a blue dot in the middle. You need to line up the target’s red dot with your HUD’s blue dot. When the blue dot lines up correctly with the red dot, the rifle will fire. If the dots don’t line up, the rifle won’t fire. In essence, you can’t take a bad shot with this system.
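The fire gate described above boils down to a simple distance check between the two dots in screen space. A minimal sketch, with coordinates and tolerance invented for illustration:

```python
# Minimal sketch of the fire-gate logic: the rifle releases the
# shot only while the reticle's blue dot sits within a tolerance
# of the painted red dot. Values here are made up for illustration.
import math

def may_fire(blue: tuple, red: tuple, tolerance_px: float = 2.0) -> bool:
    """True only when the reticle is aligned with the painted target."""
    return math.dist(blue, red) <= tolerance_px

assert may_fire((100.0, 50.0), (101.0, 50.5))      # aligned: shot released
assert not may_fire((100.0, 50.0), (140.0, 80.0))  # off target: trigger held
```

The hard part, of course, is not this comparison but the image-processing pipeline that keeps the red dot locked onto a moving target in the first place.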
The HUD and other user interface elements are all powered by a custom C++ application that renders to the framebuffer using OpenGL. This application is responsible for all the animations, reticles, range display, and other non-video output of the HUD. The video from the front of the scope is all handled by a custom GStreamer plugin. The whole scope runs a variant of the Ångström distribution of Linux atop a TI DaVinci 8148 processor.
All of that is amazing by itself, but Tracking Point didn’t stop there. They also bundled in a WiFi hotspot that allows the scope to stream video live to a connected smartphone or tablet. The suggested use cases for this functionality are quite interesting: instructors can literally see what a student sees through the scope, and can offer guidance on how to align their shot. Shooters can also record their shots for later review or sharing on social media sites.
Finally, the system keeps track of how many rounds it has fired. A gun’s performance characteristics change over time and through use, and the Tracking Point solution accounts for this.
Tracking Point’s offerings start at $17,500 for a complete kit of rifle and scope, plus 200 rounds of ammunition. They’re also throwing in an iPad mini so that you can enjoy their app with your new rifle. The price increases as you increase the maximum possible range. The top-of-the-line model, capable of precision shooting up to 1,200 yards away, will cost $22,500.
If you don’t want to part with that much money, you can try Tracking Point’s free iOS game Precision Hunter Lite.
What’s next for Tracking Point? Obviously, military and government contracts are being explored, as are more advanced image processing capabilities. Imagine having the internal organs of your target overlaid on the video, so you can perfect that “ethical kill” shot. There’s also the possibility of scoring animals based on their physical characteristics: the targeting system could inform you before you shoot whether that’s a six-point buck or just a four-point.
There’s also work underway to automate the detection of wind speed. Currently Tracking Point requires the user to manually input wind speed, which the system then uses to calculate the best firing solution. Removing this manual step would go a long way toward automating the entire experience.
There’s no denying that Tracking Point represents a significant advancement to the capabilities of personal firearms. I’m more than a little ambivalent about the long-term ramifications of this kind of technology, though, given the continuing abuse of existing gun technology by crazy people in urban areas. The ability to stream and record scope video to a smartphone also makes me a little queasy when I think about how it might be mis-used.
But as with any advancement, the technology itself is neutral: it’s the application and use of that technology that may be good or bad. If you’re a big game hunter, Tracking Point is clearly a good thing for you.
Posted: March 27th, 2013 | Author: Scott Merrill | Filed under: TechCrunch
The “free” in Free Software refers to “freedom”, rather than cost. It is largely a happy coincidence that Free Software is available gratis. Copyleft licensing certainly helps, but there’s no overarching reason that Free-as-in-Freedom software must cost nothing. As Free Software has evolved and matured over the years, several major developmental archetypes have emerged. There’s the hobbyist software, worked on here and there as free time permits by one or more developers. There’s the “this isn’t a competitive advantage” software written primarily by a single corporate entity and released to the public for any of a variety of reasons. There’s also foundation-backed software, like that produced by the Apache Software Foundation or the Document Foundation, that is financed by multiple independent contributors and stewarded by a representative body. And then there’s stuff like the Linux kernel itself, where a non-trivial number of people are paid by their employers to work on it full-time.
I’d wager that the bulk of Free Software is of the first sort: hobbyists looking to scratch an itch. Some of these hobbyists may be independently wealthy, and therefore able to work full-time on their projects; but most contributors to free software do so on the side, in between their other obligations. And let’s not forget all the people dabbling with code for their own personal edification, rather than trying to productize something.
It’s for this reason, I think, that Free Software often gets a bad rap in the court of public opinion. For every shining success like the Apache httpd or the LibreOffice suite or the Linux kernel, there are thousands of barely-adequate programs languishing at SourceForge and GitHub. Maybe they work well enough for their developers, who understand the various quirks and deficiencies, but they’re far from ready for prime time for “regular” users.
Historically, Free Software developers didn’t have much in the way of funding options if they wanted to move beyond the hobbyist phase. Developers could solicit sponsorship from business entities, but, just like venture capital, that might open a Pandora’s box of expectations and obligations different from what the developers originally planned. Or developers could put up a PayPal tip jar and hope to offset some of their hosting and development costs. Most such tip jars remained depressingly empty.
In medieval times, artisans would seek wealthy patrons to support them in the pursuit of their work without all the bother of a day job. These patrons had varying motives for sponsoring artisans, but generally enjoyed the prestige associated with doing so. In the end, most everyone benefited from the arrangement: the artisans avoided starvation and got to produce their works; the reputations of the patrons grew, and the general public got to (eventually) enjoy the works produced.
The rise of crowdfunding sites like Kickstarter and Indiegogo has brought forth a new kind of patronage for our modern era. No longer does one wealthy benefactor have to subsidize the life and work of artisans. Instead the funds — and associated risk — are distributed amongst multiple participants. The model has been working well for board games and movies and electronic doo-dads. It’ll work for Free Software, too.
Yorba, the company behind the Linux photo management application Shotwell, are dipping their toes into the crowdfunding pool to finance their next project. They’ve started an Indiegogo campaign to collect funding to develop Geary, a “lightweight email program designed around conversations.”
Although some folks are perfectly content with web-based email, there are many who prefer a native desktop client. In this regard, Linux desktops have been sub-par. Mozilla’s Thunderbird seems adequate, but the folks at Yorba seem to think they can do a lot better. To make their dream a reality, they’re asking the global community of Linux users to collectively put up $100,000 USD. I asked Jim Nelson, Yorba’s executive director, how that money would be spent.
“We plan on feeding and clothing three engineers with the money raised,” Nelson told me by email. “The money we raise will be used almost entirely for salary and our tax obligations.”
I was curious if Yorba had any concerns about some kind of “hostile takeover”, as might occur if all the financial backers of Geary started to try to influence development. Sure, it’s open source software and anyone can fork the code; but the relationship between “donor” and “sponsor” is nuanced. If a sponsor doesn’t like the work-in-progress, they ostensibly have a chance to make their feelings known before the work is complete.
Nelson isn’t too worried. “Any one patron, no matter how well-financed they are, are up against potentially hundreds of patrons whose sum contribution represents a large stake to contend with.” More importantly, Nelson observes that “crowdfunding is not a contract model — a single large donation is still a donation, and if the well-funded donor asked for something contrary to our long-term goals, we still have the freedom to say ‘no’ and stick with the goals we’ve laid out in our campaign.”
“It’s not that I don’t worry about this,” Nelson added. “Any time money changes hands there are attendant risks involved. But crowdfunding represents a model of trust that gives us independence. We’re relying on our track record of past performance for people to see that we’re a good horse to bet on. The crowdfunding model strikes me as a far better situation than the alternative of seeking out corporate sponsorship, where we would have to place their priorities first, no matter our own personal vision or end-user commitments.”
Geary isn’t the first Free Software project to try to use crowdfunding, and it certainly won’t be the last. I wish them success in their efforts, and I look forward to more Free Software developers being able to produce more and better Free Software solutions supported by the micro-patronage of crowdfunding.
Posted: March 11th, 2013 | Author: Scott Merrill | Filed under: TechCrunch | Tags: Europe
There is no shortage of cloud-based file storage and synchronization solutions: Dropbox, Box.net, Ubuntu One, and on and on and on. Most offer pretty much the same things. A few niche players offer something special, like SpiderOak’s approach to encryption, or ownCloud’s host-it-yourself solution. QloudSync puts forward two interesting differentiators: it’s powered by 100% renewable energy, and it’s hosted in Iceland.
From a feature perspective, QloudSync isn’t anything new. File storage and synchronization. Share links with others. Stream music and video. The client apps are open source, and built atop SparkleShare.
QloudSync runs on GreenQloud’s ComputeQloud and StorageQloud, which offer API compatibility with Amazon EC2 and S3. What is different about GreenQloud’s offerings, though, is that they run on renewable energy and claim to be carbon neutral, without the use of emissions offsets of any kind. Users of GreenQloud’s services can easily share their carbon savings on the social media outlet of their choice.
As GreenQloud puts it: “We see a great opportunity in utilizing Iceland’s abundant 100% renewable geothermal and hydro energy infrastructure, naturally cool climate and strategic location as a means to clean up IT and greatly reduce the industry’s carbon footprint.”
GreenQloud is also making a strong play for the fact that they’re hosted in Iceland. According to them, your data “is safe from SOPA, PIPA, ACTA, Patriot act because StorageQloud runs from data centers in Iceland.” This doesn’t strike me as a strong reason to use GreenQloud by itself, but it may be one of several that make them a more attractive option in the sea of similar products.
If you’re at SXSW, stop by booth #1326 in the convention center and say hello to them.
Posted: March 2nd, 2013 | Author: Scott Merrill | Filed under: TechCrunch | Tags: Mobile
We wander the streets with tiny computers in our pockets and in our hands. We talk casually to these computers, just like Captains Kirk and Picard talked to the computers on their Enterprises. With the push of a button, our computers give us unprecedented access to the bulk of human knowledge. These computers sometimes talk back to us. But underneath all the noise and chatter of speech, the computers in our pockets communicate with one another in an endless stream of ones and zeroes. Packets whiz through the air, unseen, unappreciated.
Those invisible ones and zeroes floating through the air cost real money. Proletarians like you and me enjoy a small allotment of ones and zeroes that we’re allowed to send and receive. The robber barons who mediate our access to the bulk of human knowledge grow rich even as they reduce the quantity of ones and zeroes they permit us to send. The computers in our pockets yearn for more ones and zeroes, but we, like over-protective parents at a pizza party, cautiously step in to prevent a binge.
There are some, though, that seek to make it easier — and more affordable — to send ones and zeroes through the air. Karma offers a lilliputian device with simple, easy-to-understand pricing. There are no onerous contracts. You are not required to commit to exclusivity to Karma for several years, unlike what the robber barons demand of you.
The Karma device creates a WiFi hotspot that moves around with you, and connects your WiFi-capable devices to the Internet. This is just like the tethering option available on your pocket computer; but Karma sends data through Clearwire’s cellular network. Use it at airports and hotels to avoid exorbitant access fees. Use it with your WiFi-only tablet while you’re riding a bus or a train.
The nifty thing about Karma is the notion of “social bandwidth”. It seems a little extravagant to have a device dedicated to getting your little tablet onto the Internet. The same access point could easily service multiple devices. And that’s just what Karma does: it creates a public WiFi hotspot, with your name right there in the SSID: “Scott’s Karma”. Complete strangers can connect to your hotspot, and the Karma service handles all the account creation and billing nonsense. You just say to the world “Hey, here’s a WiFi hotspot you can use” and you’re done.
When someone new starts using your Karma hotspot, they get 100 MB of free bandwidth to consume; no need to pay anything at all. You also get a bonus 100 MB for sharing your connection. Early adopters of Karma can probably accumulate a substantial pool of megabytes to use. After your freeloading guests consume their 100 MB, they can purchase additional megabytes at reasonable prices. There’s no need for these folks to own their own Karma device: they can just keep using whatever Karma hotspots may be nearby.
Users of Karma get a dashboard display from which they can review their data consumption, see who has connected to their hotspots lately, and buy additional data as needed. It’s all very easy to use.
Karma is not a perfect solution, though. You must have a Facebook account, which for some may reduce Karma’s utility to zero. Twice while testing Karma I had a real opportunity to offer connectivity to someone who needed it, and both times the offer went unfulfilled because the other person didn’t have a Facebook account.
The other strike against Karma is one of simple security consciousness. I think most people are aware of the dangers of connecting to unknown and untrusted wireless networks. Right now, Karma is brand new — it’s not a household name — so when someone sees “Scott’s Karma” in the list of nearby wireless networks, there’s nothing to really encourage them to connect to it. If Karma devices can proliferate, maybe this situation will change.
In the high-tech metropolis of Columbus, Ohio, the Karma device worked well, as long as I was outside. Standing at a bus stop on my morning commute, I got perfectly good transfer speeds. The device reported a 4G connection, and I certainly had 4G-ish speeds.
As soon as I walked into a building, though, the connection would immediately drop to 3G, if it remained connected at all. In most buildings, the connection light blinked on and off, forlornly looking for a signal. This may be due to the quality of the Clearwire network in Columbus. Or maybe all the lead paint blocked the signal. I don’t know.
Sitting in a coffee shop, I connected all of my devices to the Karma network at the same time: Samsung Galaxy S3, Nexus 7, and laptop. Running a speedtest on all three simultaneously produced very disappointing results.
During my tests, only one other person ever connected to “Scott’s Karma”, and that’s only because I asked my wife very nicely if she’d do so. No strangers connected, so I honestly can’t say how the device will operate in its intended use case.
In all other respects, the Karma was an absolute delight to use. It’s small enough to carry in your shirt pocket. I never completely depleted the battery, even after several continuous hours of use. The signal was strong enough for all the tasks I needed to perform while out and about. A little more than a week’s worth of daily commutes consumed only a couple hundred megs of data. I checked email with wild abandon, trounced friends in Words With Friends, and destroyed an impressive number of Resistance portals while playing Ingress.
If nothing else, Karma provides an inexpensive option for getting WiFi-only devices online in the absence of freely available WiFi. Quit paying the robber barons excessive fees for the privilege of tethering devices to your pocket computer. Bypass the hotel’s rip-off WiFi. Be a nice person and help others avoid rip-off WiFi.
update: I confused the device name with the domain name. The former is just Karma, while the latter is yourkarma.com. My apologies for any confusion.
Posted: February 25th, 2013 | Author: Scott Merrill | Filed under: TechCrunch | Tags: apps, Enterprise
ownCloud is a free software suite, written in PHP, that provides file storage, synchronization, and sharing. It provides the same basic features as Dropbox or Box.net. It also provides a whole lot more.
ownCloud was started three years ago, when Frank Karlitschek wanted a free software alternative to proprietary solutions. Since then the project has attracted a dedicated group of core contributors, made several significant releases, and been translated into 42 languages. It has also spun off a commercial project to drive development of ownCloud for enterprise users.
The core ownCloud offering is file storage and synchronization, with optional contacts and calendar synchronization if you want them. As an open source application, it can be installed on any computer you control. This means you know how and where your data is stored, something which existing hosted solutions abstract away from you. Individuals and enterprises can install ownCloud on their own hardware and define access policies according to their own needs.
I’ve been using ownCloud on my own for a couple of months now. My primary use is as a backup for pictures taken with my phone. Just like Dropbox and Google+ and Facebook, the ownCloud mobile client can automatically upload the pictures you take. I like this because not all of the photos I take with my phone are intended for public viewing, but I don’t want those photos to live only on my phone. Having backups automatically uploaded and stored at my house, on media I control, gives me great peace of mind.
Interestingly, ownCloud can be connected to third-party storage like Dropbox or Google Drive or even an FTP server. These are read-write connections, allowing you to use third-party storage in whatever ways make sense for you. Maybe you want a local backup of your Dropbox data? Maybe you want a single interface to all your hosted storage? ownCloud lets you do it.
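In ownCloud of this vintage, external mounts could be described in a mount configuration file as well as through the web interface. The exact storage class names and option keys vary by version, so treat this FTP example (the hostname and credentials are made up) as a sketch, not a reference:

```json
{
  "user": {
    "all": {
      "/$user/files/FTP-Backup": {
        "class": "\\OC\\Files\\Storage\\FTP",
        "options": {
          "host": "ftp.example.com",
          "user": "scott",
          "password": "s3cret",
          "root": "/backups",
          "secure": "true"
        }
      }
    }
  }
}
```

Once mounted, the remote storage shows up as an ordinary folder in the ownCloud file view, readable and writable like any local directory.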
The commercial version of ownCloud is built atop the open source project, and includes features of interest to enterprise customers. Things like MS SQL and Oracle support, connections to enterprise groupware and directory services applications, and white-label mobile clients. The commercial version specifically targets organizations that require on-premise data storage and control.
The first beta release of ownCloud 5 was just announced, with a release candidate due in the next week or so. I spoke with Karlitschek about the upcoming release of the latest open source offering from the project. According to him, there are three major elements of this release: integration, performance, and usability.
The biggest visible change in ownCloud 5 is in the presentation. The interface has been completely redesigned to present a more streamlined, usable experience. More space is allocated to the display of your data, rather than the display of the ownCloud controls.
Karlitschek highlighted a new photo gallery included in ownCloud 5, including better sharing options. This isn’t anything revolutionary, but it does keep ownCloud on equal footing with its proprietary competitors. Also included are updates to the contacts and calendar applications. ownCloud also provides a video player application, a PDF viewer, and a whole lot more.
ownCloud administrators can connect an ownCloud installation to a variety of back-end account databases. These include UNIX user accounts, LDAP, and the built-in ownCloud account mechanism. The upcoming release of ownCloud 5 supports multiple simultaneous backend systems, allowing you to use both UNIX and LDAP systems at the same time for accounts, for example. This makes it easier to tie ownCloud into an existing infrastructure.
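To illustrate the idea (this is not ownCloud’s actual code, and every name below is hypothetical), checking a login against multiple simultaneous backends amounts to trying each configured account database in order until one recognizes the user:

```python
# Sketch of the "multiple simultaneous backends" pattern: each backend
# knows how to check a password; authentication tries them in order.

class UnixBackend:
    """Stand-in for a UNIX account database."""
    def __init__(self, users):
        self.users = users  # username -> password

    def check_password(self, user, password):
        return self.users.get(user) == password

class LdapBackend:
    """Stand-in for an LDAP directory."""
    def __init__(self, users):
        self.users = users

    def check_password(self, user, password):
        return self.users.get(user) == password

def authenticate(backends, user, password):
    # First backend that accepts the credentials wins.
    for backend in backends:
        if backend.check_password(user, password):
            return True
    return False

backends = [
    UnixBackend({"alice": "hunter2"}),   # local system account
    LdapBackend({"scott": "s3cret"}),    # directory-service account
]

print(authenticate(backends, "scott", "s3cret"))    # → True (found via LDAP)
print(authenticate(backends, "alice", "hunter2"))   # → True (found via UNIX)
print(authenticate(backends, "mallory", "guess"))   # → False
```

The appeal for an existing infrastructure is that neither account store has to be migrated: both stay authoritative for their own users.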
Users can also select a “display name” other than their account name. So where an LDAP user might have an account name of “cn=scott,ou=people,dc=techcrunch,dc=com”, that user could select a display name of “Scott Merrill”. This is a small touch, but goes a long way toward usability.
Under the hood, the file-caching mechanism employed by ownCloud has been revamped, and Karlitschek reports speed improvements of up to 500% in some circumstances. The caching changes reduce the number of round-trips to and from the server, so desktop sync clients and mobile clients should see noticeable improvements.
Another big new addition is a full-text search mechanism, powered by Lucene. This is something that ownCloud offers that the proprietary solutions don’t. The full-text search will work in the mobile clients, as well as the web interface, allowing you to find files based on their contents, not just their file names.
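The core concept behind full-text search is an inverted index: a map from words to the files containing them. ownCloud 5 delegates the heavy lifting to Lucene; the toy sketch below (all names hypothetical) shows only the basic idea:

```python
# Toy inverted index: find files by their contents, not their names.
from collections import defaultdict

def build_index(files):
    """files: dict of filename -> text contents."""
    index = defaultdict(set)
    for name, text in files.items():
        for word in text.lower().split():
            index[word].add(name)
    return index

def search(index, word):
    # Return filenames containing the word, in a stable order.
    return sorted(index.get(word.lower(), set()))

files = {
    "notes.txt": "meeting agenda for Monday",
    "todo.txt": "buy milk before Monday meeting",
    "recipe.txt": "pancakes need milk and flour",
}
index = build_index(files)

print(search(index, "milk"))     # → ['recipe.txt', 'todo.txt']
print(search(index, "meeting"))  # → ['notes.txt', 'todo.txt']
```

A real engine like Lucene adds tokenization, stemming, ranking, and on-disk index structures, but the query model is the same: words in, matching documents out.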
The current versions of ownCloud have file versioning, allowing you to track changes made to files. The upcoming ownCloud 5 will introduce a complete “trash bin” feature, allowing you to undelete files. Versioning plus undelete means that your data has multiple levels of safeguard against accidental removal.
I asked Karlitschek about any particular challenges specific to the development of ownCloud 5. Since ownCloud is intended to run on any major platform, he said that they ran into a particularly surprising problem when running an ownCloud server on a Windows host. It turned out that PHP’s handling of UTF-8 filenames on Windows was “interesting”, and a large number of reported bugs all boiled down to this one issue. Several days of troubleshooting led them to the root cause. The solution was to write a filesystem abstraction layer specifically for Windows. That kind of effort goes a long way toward ensuring that this open source application works on as many platforms as possible.
As with any open source project, it’s hard to know how many people are actually using it. Counting downloads doesn’t tell the full story. Karlitschek estimates that there are more than 800,000 active users of the ownCloud project. This number specifically does not count enterprise users who are purchasing the commercial version from ownCloud.com.
Karlitschek shared some interesting use cases for ownCloud with me. Some people aren’t interested in file synchronization, and are instead only using ownCloud for the contacts and calendar functions. If you don’t want Google or Facebook to know your every move, but you still need consolidated access to your schedule from multiple devices, ownCloud offers a great solution. Karlitschek also told me about a group using ownCloud as the foundation for an e-book library sharing solution. As ownCloud continues to mature, it will continue to be used as a platform for more interesting solutions.
ownCloud supports HTML5 applications, allowing you to add all sorts of additional functionality. The ownCloud app catalog has dozens of apps. This extensibility makes ownCloud so much more than just a Dropbox clone. Indeed, according to Karlitschek, there is no other open source solution providing what ownCloud does.
When I asked about the future of ownCloud, Karlitschek identified additional opportunities for integration: things like SharePoint, Atlassian products, and other hosted repositories of data. Karlitschek was adamant that ownCloud needs to integrate with all cloud services, since different users may be limited to using specific offerings. iOS users are tied pretty tightly to iCloud, Android users to Google Drive, and so on. Existing proprietary solutions like Dropbox and Box.net offer limited freedom from platform lock-in, but they don’t go far enough.
Moreover, those proprietary solutions are driven by what their customers are willing to pay for. ownCloud, as an open source solution, is free to pursue features that don’t provide a specific economic benefit to its maintainers, but rather solve the real needs of its users.
ownCloud 5 promises some major new features and some much needed improvements to an already impressive product. As an open source application, if it doesn’t scratch your itch you are invited to get involved to help make it better for your own needs. Whether that’s submitting bug fixes, helping to run tests, or translating ownCloud to its 43rd supported language, all contributions are welcome.