Semantics refers to the underlying meanings of speech, words, phrases, signs, and the symbols that represent them. Essentially, it is the study of the relationships among language, symbols, and meaning, and how they interact with one another.
A great example of this is the difference between literal, slang, and sarcastic speech. When I tell you I’m going for a drive-by, you might think I’m in a gang and about to commit a heinous act. However, if you knew I was about to visit a friend’s new house, you’d understand from context that I just meant a short drive over to see their new place.
The heinous act would, of course, not be assumed in this case. But if you knew how much of a weirdo my buddy Rick is, you might second-guess yourself.
What is Semantic Memory?
When it comes to memory, there’s something called semantic memory. It’s a type of long-term memory that deals with facts and ideas that aren’t necessarily drawn from personal experience. The concept was first formally proposed by Endel Tulving in 1972, in a volume he co-edited with W. Donaldson.
Speaking of Tulving, he describes semantic memory as a memory system that deals with words and verbal symbols, their meanings and referents, the relations among them, and the rules, formulas, and algorithms for manipulating them. So, in short, semantic memory is all about the meanings of words and how they’re stored in our brains.
Semantic memory refers to the part of memory that holds the meaning of things we remember, rather than the specific details. This is in contrast to episodic memory, which holds the unique particulars of an experience.
A word’s meaning, in particular, is largely determined by the company it keeps. The relationships between words form a semantic network, which can be analyzed to reveal how a given word is understood. In these networks, we often see links like “part of” and “kind of” that help to define a word’s meaning.
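To make that concrete, here’s a tiny sketch in Python of what a network of “kind of” and “part of” links might look like. The words and links are made up for the example; the point is simply that meaning comes from how words connect to other words:

```python
# A minimal sketch of a semantic network with invented example entries.
# Each word maps to a list of (relation, target) links such as
# "kind of" (category membership) and "part of" (composition).
semantic_network = {
    "canary": [("kind of", "bird"), ("can", "sing")],
    "bird":   [("kind of", "animal"), ("has", "wings")],
    "wing":   [("part of", "bird")],
}

def describe(word):
    """Print every stored link for a word, e.g. 'canary -- kind of --> bird'."""
    for relation, target in semantic_network.get(word, []):
        print(f"{word} -- {relation} --> {target}")

describe("canary")
describe("bird")
```

Follow the links outward from any word and you get a rough picture of what that word “means” within the network.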
But what about automated ontologies? In these cases, the links between words are computed vectors with no explicit, human-readable meaning. Techniques like latent semantic indexing, support vector machines, natural language processing, neural networks, and predicate calculus are used to approximate the meaning of words. It’s a fascinating area of research, and one that continues to evolve as we gain a better understanding of how our brains store and retrieve information.
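To give a flavor of what “vectors without explicit meaning” looks like in practice, here’s a toy Python sketch. The numbers are invented for illustration; a real system would learn them from huge amounts of text, but the comparison step, cosine similarity, works the same way:

```python
import math

# Illustrative only: tiny hand-made vectors standing in for the learned,
# opaque vectors an automated system might compute for each word.
vectors = {
    "king":  [0.9, 0.1, 0.7],
    "queen": [0.85, 0.15, 0.75],
    "apple": [0.1, 0.9, 0.2],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["king"], vectors["queen"]))  # high: related words
print(cosine_similarity(vectors["king"], vectors["apple"]))  # low: unrelated words
```

No single number in those vectors “means” anything on its own; relatedness only emerges when you compare vectors against one another.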
Semantics and Prototype Theory
Prototype theory, which is related to semantic memory, suggests that we categorize things in our minds based on their typical or prototypical examples. For example, when we think of a bird, we might picture a robin or a sparrow, rather than a penguin or an ostrich. Those typical examples serve as our mental prototype for the category of “bird.”
This idea is related to semantic memory because it’s all about the meaning we attach to words and concepts. Our mental prototype for a bird is based on our semantic memory of what a bird is and what it looks like.
Prototype theory helps explain how we can easily recognize things that we’ve never seen before. We can quickly identify something as a bird even if it doesn’t perfectly fit our mental prototype, because it shares enough similarities with that prototype.
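If you like thinking in code, here’s one rough (and very simplified) way to picture that process in Python. The features and the threshold are invented for the example; the point is just that an imperfect match can still clear the bar:

```python
# A rough sketch of prototype-style categorization, with invented features.
# An item counts as a "bird" if it shares enough features with the prototype,
# even when the match isn't perfect (a penguin doesn't fly, but still qualifies).
bird_prototype = {"has feathers", "lays eggs", "has wings", "flies", "small"}

def looks_like_a_bird(features, threshold=0.6):
    """Return True when the feature overlap with the prototype is high enough."""
    overlap = len(features & bird_prototype) / len(bird_prototype)
    return overlap >= threshold

robin   = {"has feathers", "lays eggs", "has wings", "flies", "small"}
penguin = {"has feathers", "lays eggs", "has wings", "swims"}

print(looks_like_a_bird(robin))    # True: near-perfect match with the prototype
print(looks_like_a_bird(penguin))  # True: imperfect, but similar enough
```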
So, in short, semantic memory and prototype theory are both about how we store and use meaning in our minds. And by understanding these processes, we can better understand how our brains organize and make sense of the world around us.