For Artificial Humor we'd need to add a confidence matrix of
  expectations, with different layers of certainty.

  Humor isn't hard to decode. It's a change of context on several
  levels simultaneously. Some changes we expect, which makes it
  funny (it fits a pattern-of-funny, and the laugh typically comes
  after a certain set of parameters is met, which requires timing);
  then there is the unexpected change of context, where we are in
  a safe environment and the context / world-frame change isn't
  deadly.

  Safe/unsafe - that's another level that needs to be added to the
  confidence matrix.

  In an unsafe environment, unless one has cognitive differences,
  "THIS ISN'T FUNNY!" is a common response; yet so is laughter...
  it "lightens the mood" because one laughs at the cognitive
  dissonance to help reconcile it (laughing your way into
  acceptance), turning the XOR into an ==, as it were.

  Anyway, I can visualize the humor program sending and receiving
  in pseudo-code in my head, but I'm sure somebody else has already
  drawn it out, labelled it, and built humor programs that can
  comprehend jokes.
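  The idea above can be sketched roughly in code. This is a minimal
  sketch, not a real humor program: the layer names, the 0.5
  confidence threshold, and the evaluate_humor function are all
  hypothetical stand-ins for the "confidence matrix of
  expectations", the safe/unsafe level, and the timing parameters
  described above.

```python
# Hypothetical sketch of a confidence matrix of expectations.
# Each layer carries a prediction plus a certainty; a laugh fires
# when a confident prediction is violated (the context shift) in a
# safe environment, once the timing parameters are met.

from dataclasses import dataclass

@dataclass
class Expectation:
    layer: str         # e.g. "wordplay", "social", "world-frame"
    predicted: str     # what the listener expects on that layer
    confidence: float  # 0.0 - 1.0 certainty in that prediction

def evaluate_humor(expectations, observed, safe_environment, timing_ok):
    """Laugh when a confident expectation is violated while the
    situation is safe and the timing fits."""
    # Context shift: the observation contradicts a confident
    # prediction on some layer (threshold is arbitrary here).
    shifts = [e for e in expectations
              if observed.get(e.layer) != e.predicted and e.confidence > 0.5]
    if not shifts:
        return False       # nothing surprising, nothing to laugh at
    if not safe_environment:
        return False       # the "THIS ISN'T FUNNY!" branch
    return timing_ok       # laugh only after the parameters are met

# A pun violates the word-level expectation while the social
# frame stays intact and the environment stays safe.
expectations = [
    Expectation("wordplay", "literal meaning", 0.9),
    Expectation("social", "friendly banter", 0.8),
]
observed = {"wordplay": "double meaning", "social": "friendly banter"}
print(evaluate_humor(expectations, observed, safe_environment=True, timing_ok=True))
```

  The safe/unsafe flag gating the shift check is one way to encode
  the "another level" point above: the same context shift that
  produces laughter in a safe frame produces the opposite response
  in an unsafe one.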

  If they don't, they will, because if I can see it, I'm sure
  somebody else 'out there' has too, or will.