Jokes can be codified in many ways. One may code them by theme (e.g. a joke about frogs, tables, or men), by pattern (e.g. ‘A man walks into a bar’), or – as we will do here – by their structure.
This suits our purpose directly, since we will be moving across different types of structure:
- This to That
- Meta
- Intertextual
- Meta-intertextual, and
- Self-referential
For ease of ‘grasp’, I’ve stripped away the accompanying text and simply numbered the types below:
1. This to That
2. Meta
3. Intertextual
4. Meta-intertextual
5. Self-referential
This, in a way, is our shorthand – its aim being to help us ‘see’, or spot, jokes that are not of the ‘This to That’ type.
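As a minimal sketch of how this shorthand might look if it were actually coded (the names and numbering below are simply mine, mirroring the list above, not a fixed scheme), the five structures could be held in an enumeration, so that a coded joke is just a pairing of its text with a structure number:

```python
from enum import Enum


class JokeStructure(Enum):
    """The five structural types, numbered to match the shorthand list above."""
    THIS_TO_THAT = 1
    META = 2
    INTERTEXTUAL = 3
    META_INTERTEXTUAL = 4
    SELF_REFERENTIAL = 5


# A coded joke is then nothing more than joke text paired with its number.
# The pairing below is purely illustrative.
coded_joke = ("<joke text here>", JokeStructure.THIS_TO_THAT)
```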
Why spot them at all? When you do, you notice something deeper than a ‘normal’ joke: the cleverness is the extra depth your mind registers the moment it ‘gets the joke’. That moment offers some idea of how the mind unpacks layers – especially with meta and meta-intertextual jokes – and, for self-referential jokes, how it responds to ‘loops’ and paradox. In other words, it is ‘that thing’ that flashes in your mind as you process the information in front of you, rather than it staying hidden ‘inside’.
In terms of broader applications, perhaps AI could take each of these images and codify the jokes by type. Eventually, if AI were to have a mind, we would want to compare it against a human’s ability to spot the type quickly, and to see evidence of the internal experience – of ‘how it is known’ to be that type.
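As a purely hypothetical sketch of what that codifying step might look like for an AI (the surface cues checked here are my own placeholders, not a working method), the task reduces to a function from joke text to one of the five structure types:

```python
def classify_structure(joke_text: str) -> JokeStructure:
    """Deliberately naive placeholder: joke text in, structure type out.

    Uses the JokeStructure enum from the sketch above. A real system
    would need far richer signals than surface wording (knowledge of
    other texts for intertextual jokes, or whether the joke points at
    itself for self-referential ones).
    """
    text = joke_text.lower()
    if "this joke" in text or "this sentence" in text:
        # Crude cue: the joke explicitly talks about itself.
        return JokeStructure.SELF_REFERENTIAL
    # Spotting meta, intertextual and meta-intertextual structure cannot be
    # done from surface cues alone, so the placeholder stops here.
    raise NotImplementedError("Structure not determinable from surface cues.")
```

Even at this toy level, the comparison described above would be between how quickly a human and such a function arrive at the same label, and what each can report about how the label was reached.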