When I was too young to pay attention, relational databases transitioned
from an academic to an industrial technology. A few organizations ended up
making some high-performance engines, and the rest of us applied these
idiosyncratically to various problems. Now it looks like supervised
machine learning is undergoing a similar transition, where a few
organizations are making some high-performance implementations, and
the rest of us will leverage those implementations to solve problems.
Today's announcement of the general availability of Azure ML is one
step in this direction.
For other forms of machine learning, the end game is less clear. In
particular, consider adversarial problems such as filtering spam
emails, identifying bogus product reviews, or detecting
data center intrusions. Is the best strategy for
(white hat) researchers to openly share techniques and tools?
On the one hand, it makes the good guys smarter; on the other hand,
it also informs the bad guys. The issues are similar to those
raised for biological research in the wake of 9/11, where
good arguments were made both for and against openness.
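To make the adversarial flavor concrete, here's a minimal sketch (made-up messages, scikit-learn assumed, nothing to do with any particular product): a bag-of-words spam filter learns yesterday's spam vocabulary, and the spammer responds by rewording, so the static model starts missing.

  from sklearn.feature_extraction.text import CountVectorizer
  from sklearn.naive_bayes import MultinomialNB

  # Yesterday's (toy) training data: label 1 = spam, 0 = ham.
  train_texts = [
      "win cash now", "free prize claim today",    # spam
      "meeting at noon", "notes from the review",  # ham
  ]
  train_labels = [1, 1, 0, 0]

  vectorizer = CountVectorizer()
  clf = MultinomialNB().fit(vectorizer.fit_transform(train_texts), train_labels)

  # Today the spammer rewords the same pitch to dodge the learned vocabulary,
  # so the filter likely waves both messages through as ham.
  evasive_spam = ["w1n ca$h n0w", "complimentary reward, respond promptly"]
  print(clf.predict(vectorizer.transform(evasive_spam)))

The point isn't this particular classifier; it's that the data distribution is chosen by an adversary in response to whatever you deploy, so the model has to be maintained continually.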
My prediction is inspired by the NSA and my own experience running
an email server in the 1990s. Regarding the former, what the NSA
did was hire a bunch of really smart people and then sequester them.
This gives the benefits of community (peer-review, collaboration,
etc.) while limiting the costs of disclosure. Regarding the latter,
I remember that running my own email server became extremely inconvenient
as the arms race between spammers and defenders escalated. Eventually,
it was easier to hand my email over to one of the major providers.
Based upon this, I think there will only be a handful of datacenter
service (aka cloud computing) providers, because adversarial concerns will
become too complicated for all but the largest organizations. I think
this will be driven primarily by organizations adopting the NSA strategy
of building walled communities of researchers, which provides increasing
returns to scale.
Here's a positive spin: as an entrepreneur, if you can identify an
adversarial problem developing in your business model (e.g., Yelp circa
2006 presumably discovered fake reviews were increasing), embrace it!
This can provide a defensive moat and/or improve your exit on acquisition.