Monday, April 26, 2010

Artificial intelligence

What is artificial intelligence?  In short, it is statistics.

Where the story is the unit of measure for human intelligence, data is the unit of measure for computer intelligence.  This is where we will always be different.  In the human mind, the one can be greater than the one thousand.

Asking an AI "should we do this or that?" really asks two questions.  One:  have you seen this situation, or a comparable one, before - do you have data on this question?  Two:  what is the statistically correct response for the desired outcome?  But humans are opposed to this.  We are passionate about the outliers and want to know more about them.  We fall in love with the one-off result because that situation makes the best story, and the best story is the one most remembered.

The one-off may also hold clues that lead us to new solutions.  Sometimes the human is right and the AI is wrong.  There are times when statistical outliers begin to correlate with certain characteristics.  In this way they can be sub-segmented, and once sub-segmented they cease to be statistical outliers.  The idea is that if you could replicate exactly the same circumstances, you could replicate the outlier.  In some ways this may be true, but there is still randomness in the data.  If you flip a coin - regardless of the circumstances - you will get some heads and some tails.
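The distinction between an outlier that hides a subgroup and one that is pure chance can be sketched in a few lines.  This is a toy simulation, not anyone's real system - the "night owl" subgroup, the response rates, and the sample sizes are all invented for illustration:

```python
import random

random.seed(42)

# Hypothetical population: most respond about 10% of the time,
# but a small hidden subgroup ("night owls") responds about 60%.
population = []
for _ in range(1000):
    night_owl = random.random() < 0.05          # hidden characteristic
    rate = 0.60 if night_owl else 0.10
    responded = random.random() < rate
    population.append((night_owl, responded))

# Pooled together, the high responders look like statistical outliers.
overall = sum(r for _, r in population) / len(population)

# Sub-segmented by the hidden characteristic, each group is ordinary.
owls = [r for owl, r in population if owl]
rest = [r for owl, r in population if not owl]
print(f"overall rate:  {overall:.2f}")
print(f"night owls:    {sum(owls)/len(owls):.2f}")   # near 0.60
print(f"everyone else: {sum(rest)/len(rest):.2f}")   # near 0.10

# A fair coin, by contrast, has no hidden subgroup to discover:
flips = [random.random() < 0.5 for _ in range(1000)]
print(f"coin heads:    {sum(flips)/len(flips):.2f}") # near 0.50
```

If you knew the characteristic, the outliers dissolve into a well-behaved subgroup; if the variation is coin-flip randomness, no amount of segmentation will explain it.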

How do you separate correlation to characteristics that subset the data from true randomness, and how much data do you need to make that separation?  Is it possible to arrive at this segregation at all?  Is this the new truth of human/AI dependency:  that there will never be a perfect AI, due to the chance of replicating outliers and the existence of unknown subsets in the data?

1 comment:

  1. Examples of statistical AIs in place today:
    Pandora Radio
    Siri - iPhone app recently purchased by Apple Inc.