
Investment Checklists?

One of my New Year’s resolutions this year was to read more “long-form” material.  After reading Nick Carr’s scary book The Shallows, I became aware that my interest in and ability to read anything longer than a paragraph or a screen were atrophying, so I resolved to get back in the saddle.

Which led to my reading another great book, Atul Gawande’s The Checklist Manifesto.  If you haven’t read it, do so.  He shows that setting up a “checklist”-style process is essential to avoiding mistakes in areas as diverse as the surgical operating room, the cockpit of an airplane, and, yes, a VC firm.

For better or worse, the sponsor of a deal generally gets excited about it (if they don’t, it’s probably not a great deal!) and tends to, ahem, overlook certain of its shortcomings.  Having a checklist in place is a way to make sure that all the i’s are dotted and the t’s crossed.

Your thoughts?

Thinking about “Consumerization of IT”

I’ve been doing a lot of writing — or at least content generation (I don’t think PowerPoint really counts as writing) — about mobile lately, or about the explosion of “new clients”, or about the consumerization of IT.  I’ll share the stuff here as it comes out.

Unfortunately, none of these terms does any justice to what’s going on.  I think we’re witnessing the end of the PC/client-server/desktop web era and the beginning of a new era.

What marks the new era (in no particular order)?

  • Diversity of clients
  • Portability of clients
  • Mobilization of “real” computing
  • Beginnings of ubiquitous computing (a meme where you control all the computing resources available to you by carrying an identity/authorization around with you in a mobile client of some sort)
  • Cloud-ification of the back end
  • Rise of the cloud service provider
  • End of the mechanical disk drive
  • Big Data-fueled applications
  • Video as the “new text”

Very interesting.  More later.

Why People Don’t Trust Companies (or at least don’t trust their publicity)

I was at an all-day conference on online PR.  I’m not a real PR person, but I drive Valhalla’s PR.  I also know what I don’t know, so I hoped I’d get something out of an all-day klatsch on measuring PR effectiveness online.

Good information, for the most part.

But the bloggable thing was the exercise we did in the early afternoon: each table in the big room had to handle a synthetic online publicity crisis.  A video was uploaded to YouTube showing child laborers in our (fictional) coffee plantations in Brazil.  Kids saying, “Oh, yeah, I don’t get injured most of the time.”  Stuff like that.

We had five minutes to react, and then found out that our own people said the facts of the video were probably authentic.  And then moms began to blog about us online…

I said to our table, “Why not just tell the truth as we know it: yes, the footage is genuine; yes, this is a situation we’re going to get on top of; yes, we are acknowledging it.”

Everyone at the table was horrified: we couldn’t do that, it would “escalate” the crisis.

And no one else in the big room of 300 brought it up.  An acknowledgement wasn’t even on the table.

My wife tells me I’m nerdishly honest, and there’s something to that.  If someone had laid out a plan to acknowledge the damaging publicity in some face-saving way, it would have been an improvement on what I was suggesting.

But everyone’s response was to “keep it from spreading”, just the thing we had been told in a panel an hour before was the way _not_ to handle a crisis.

Oh, well.  I guess there are nuances to PR we amateurs don’t get.

Wisdom of Fights?

A lot of attention has been paid to the “wisdom of crowds”, with great discussion about whether, when, and how crowdsourcing gives accurate appraisals of situations.  We are the wiser for it.

But there has been very little talk about another widespread belief, and perhaps a distinctively American one: I call it the “wisdom of fights”.

I thought of this earlier this week watching yet another discussion panel where the MC clearly believed his job was to get the panelists to start disagreeing with one another.

Why?  Is there some intrinsic virtue to disagreement?

It’s a widespread belief.  Our justice system believes that both defense and prosecution should unabashedly attack one another’s positions, with the clear implication that this process will surface everything a jury needs to reach a decision.  The judge is required so that the combatants fight fair, but there’s no notion that the fighting itself is suboptimal.

Politics: the debate format has pretty much supplanted the speech format.  If we let Romney poke holes in Obama’s positions and Obama poke holes in Romney’s, we’ll supposedly know as much as if we had read through thoughtful presentations of each of their positions and then come to our own conclusions.

“Let’s you and him fight” is a very popular news format today, and most of the criticisms decry the lack of civility in the format, not the lack of veracity.

What makes science work is that both sides agree that a certain experiment will falsify a theory if it goes wrong.  Because the test is connected to the theory as a whole, something of significance takes place in the disagreement.  It’s profound disagreement.

So much of the “wisdom of fights” disagreement is shallow: it’s finding out that someone didn’t publish his tax returns, that someone won’t answer a certain question, that someone is vulnerable to a humiliating analogy or insult.  The disagreement isn’t under test in any way, except in the trivial sense that someone who stands up under repeated insult has some kind of staying power.

The wisdom of fights is very suspect.

Gordon’s Law

Some years ago, as a soon-to-be-ex-AI guy, I came to a realization that I immodestly named “Gordon’s Law”: it’s easier to program us to act like computers than it is to program computers to act like us.

If Gordon’s Law were not so, we would have voice recognition instead of “interactive” voice menus (“press 3 if you’ve despaired of leaving this menu”, etc.).  We would have automatic Root Cause Analysis rather than trouble ticketing systems.  We would have online advertising tailored to our wants and current projects rather than “personalization”.

To be sure, there is Watson, and there is Deep Blue, and my wife told me yesterday there’s some software competing for crossword puzzle champion of the world.  But in some sense — and I include Siri here — these are parlor tricks.  As Joseph Weizenbaum found out years ago with the software psychotherapist Eliza, there are some clever ideas that simulate humans to humans.  They don’t wear well.  There’s talk of having Watson do medical diagnosis, but there’s also talk of people wanting to throw their iPhones out the window when it turns out Siri really doesn’t do a very good job at all of understanding what we want or what we want to know.  And if Watson ends up doing decent medical diagnosis, I’ll eat my hat.
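For readers who never played with Eliza, the trick really was that thin: a handful of patterns and canned reflections.  Here’s a toy reconstruction of my own (the rules and names are mine for illustration, not Weizenbaum’s actual code):

```python
import re

# A toy Eliza-style responder: a few regex patterns with canned reflections.
# This is an illustrative sketch, not Weizenbaum's program.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please go on."),  # catch-all keeps the conversation moving
]

def respond(utterance: str) -> str:
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I feel ignored by Siri"))  # -> "Why do you feel ignored by siri?"
```

Fifty lines of this can feel eerily human for a few minutes, which is exactly the point: it simulates us to us, and then it doesn’t wear well.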

Why should Gordon’s Law be true?  Aren’t our brains “just” meatware?  Isn’t everything, as Stephen Wolfram says, a computation?

I don’t know, but I do know that we work well together — information devices and humans — when we do what we’re each good at.  We don’t pretend to be machines and they don’t pretend to be humans.

The Metadata Problem

I am not a metadata expert.  I have a couple of friends who could run circles around me in terms of depth and breadth of their experience.  But I do have opinions.

I’ve always thought that the logical person to append metadata — the person who brings the data in — is also the least likely person to know which metadata will be of interest.  Downstream, the consumers of the data will have their separate — and diverse — metadata “agendas”, if you will.  The originator doesn’t know what those agendas are (and probably can’t know, since they change over time).  And, of course, the consumers of data don’t know what metadata apply to a particular dataset without examining it.

In addition, the task of appending metadata is an add-on: it’s something extra you have to do.  What incentive does the originator of a dataset have to do this, other than charity?

Tagging systems like del.icio.us have solved part of this problem with a bottom-up approach, where metadata are tagged onto datasets retroactively by any user of the system.  These systems don’t satisfy metadata zealots because the vocabularies aren’t controlled, but, as the Wikipedia article on tagging says, things work out.  The vocabularies are usable and typically converge, or at least don’t diverge too badly.  The crowd is, if not wise, at least not clueless.

It would be even better if there weren’t a separate tagging operation at all.  In a no-tagging operation, some workflow that the user was going to do anyhow would implicitly add metadata.

Typical use case: when a user drags an email to a “junk” or “spam” folder, the mail management system can infer that the email should be tagged as junk or spam.
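Here’s a toy sketch of what that implicit tagging might look like (the folder-to-tag mapping and the names are mine, purely for illustration):

```python
from dataclasses import dataclass, field

# Illustrative sketch: infer tags from an action the user performs anyway.
# The folder names and tag mapping are hypothetical.
FOLDER_TAGS = {
    "junk": {"junk", "spam"},
    "receipts": {"finance", "receipt"},
    "travel": {"travel", "itinerary"},
}

@dataclass
class Email:
    subject: str
    tags: set = field(default_factory=set)

def move_to_folder(email: Email, folder: str) -> None:
    """Moving the mail is the user's real task; tagging happens as a side effect."""
    email.tags |= FOLDER_TAGS.get(folder, set())

msg = Email(subject="You have won a prize!!!")
move_to_folder(msg, "junk")
print(msg.tags)  # {'junk', 'spam'}
```

The user never sees a tagging step at all, which is the whole appeal: the metadata comes along for free with a workflow they were going to do anyhow.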

I struggle a lot to get proper metadata in my personal information cloud, by dragging emails to folders and tagging.  The payoff is that search works pretty well for me in tracking things down when I need to.

Your thoughts?

Connected TV

Reading a bunch about marrying Internet and traditional TV today, trying, among other things, to suss out how the ecosystem is going to develop.

One insight I had today: if people end up preferring smart TVs at all, it won’t be because they’ve got one fewer box. The history of phones, smartphones, and now tablets shows that people pick their boxes for functionality, not box count. People cheerfully carried around BlackBerrys and dumb phones together for years, one for email, one for voice. Today people have a phone and a tablet and a laptop, all for slightly different use cases, each picked for excellence of function.

My guess would be people will do the same for TVs. We will cheerfully combine legacy set top box, new box, and maybe even smart TV, if each excels at some purpose we want.

Money is a vector, not a scalar

We were having a discussion about “throwing good money after bad” the other day, and I found myself blurting out “well, after all, money is a vector, not a scalar.”

I’m sure you all remember (from your linear algebra class, perhaps) the difference between a vector quantity and a scalar. A scalar quantity has a magnitude while a vector has a magnitude and a direction.

“Good” money and “bad”. What are these but an additional dimension for money? In a bad investment the quantity of the money grows while its goodness shrinks; in a good investment they grow together. Additional money in a bad investment grows smoothly in quantity but has a discontinuity as it leaps from bad to good.
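If you want to push the metaphor into code (just a toy sketch of my own, with an invented “goodness” axis standing in for direction):

```python
from dataclasses import dataclass

# Toy illustration of the metaphor: a scalar dollar amount plus a direction
# ("goodness" from -1 to +1). The encoding is entirely my own whimsy.
@dataclass
class Money:
    amount: float     # magnitude
    goodness: float   # direction: +1 good, -1 bad

def throw_after(existing: Money, more: float) -> Money:
    # Throwing good money after bad: the amount grows, the direction doesn't.
    return Money(existing.amount + more, existing.goodness)

bad_bet = Money(amount=1_000_000, goodness=-0.8)
print(throw_after(bad_bet, 250_000))  # Money(amount=1250000, goodness=-0.8)
```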

There are lots of discussions about money that acknowledge its vector nature. “Dumb” money and “smart”. “Patient” money. The “velocity” of money. “Easy” money (large first derivative of money with respect to effort).

Maybe just a dumb metaphor. Your thoughts?