[Figure: word semantic network shaped as a cloud diagram]

How do words work?

Words are the bricks our speech is built from: every work of literature, every politician's speech. A word is a sort of LEGO piece: its shape and structure define its function, as well as its ability to interact with other pieces.

Most importantly, a word has semantic meaning – an association with a certain object of thought or reality. The word porcupine is linked to the image of a large, quilled rodent; the word hammer is associated with the nailing tool. Red paints everything with the corresponding colour, and walking makes us imagine legs stepping along some path.

Semantic meaning is not absolute: to some people, walking means legs stepping on a concrete path; to others, on a forest path. To some, red is brighter than to others. Some people, perhaps, have never properly seen a porcupine and would imagine some indistinct ball of needles, while a zoologist would picture the animal down to every detail.

In other words, meaning is unique to every person, and every word is interpreted slightly differently, based on individual experience.

Every person has their own language. I remember being asked how many dialects of Chinese there are. I answered: more than there are Chinese people. Which is very true: every person speaks a unique language. Even two neighbours speak slightly different dialects.

Learn more about subjective interpretation here.

Next detail: every word bears a limited set of information. Say, a human. What kind of human appeared in your head? Male or female? And what age? And if I say a cat, then… what breed, what colour? Perhaps not even a domestic cat…

As you can see, each word not only brings its meaning into our heads but also forces us to make assumptions. Or, more specifically, to:

  1. Imagine objects inseparable from the one named by the word. If one says a girl, then she is supposed to have hair. Some clothes. If one says jogging, then there’s supposed to be a stadium, or a path. The majority* of objects are unimaginable unless we imagine related objects along with them.
    *maybe even all of them
  2. Wild-guess characteristics of an object, on an “unless said otherwise” basis. When they say a tiger, I instantly imagine it orange and black. It was never stated, but I have to pick some colour to paint it in my head. On the other hand, in the next sentence they could tell me it was actually an albino-white tiger. Diagnosis: I missed. It happens sometimes. Now I have to quickly “repaint” my tiger so I can follow the conversation.

Those assumptions are individual – say, at cat many would imagine their own pet. But many assumptions are universal, or at least language-specific. Say, at fire only an unusual person would imagine it green or blue, even though blue and green fire exist. In this case, the assumption of colour is hard-wired into the word.

The ability of words to bring up objects related to their meaning (girl brings up hair, clothes, etc.) mirrors the fact that all notions in a language form a web. We know that all girls have such properties as age, hair colour (unless hair equals “null”) and height; mountains have height and geolocation. That’s what the concept of the “semantic web” is based on, which some consider a key to AI. Objects and meanings cannot be understood in their entirety if separated from the global ocean of knowledge.
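The web idea can be sketched as a tiny data structure. This is only an illustration – the notions, their properties, and the `expected_properties` helper are invented for this example, not part of any real semantic-web standard:

```python
# A toy semantic network: each notion lists the properties it is expected
# to have and the related notions it tends to bring along.
semantic_net = {
    "girl":     {"properties": ["age", "hair colour", "height"],
                 "brings_up": ["hair", "clothes"]},
    "mountain": {"properties": ["height", "geolocation"],
                 "brings_up": ["slope", "peak"]},
}

def expected_properties(word):
    """Properties a listener assumes the named object has."""
    return semantic_net.get(word, {}).get("properties", [])
```

A listener hearing "girl" automatically assumes slots like age and hair colour exist, even before any of them is stated – exactly the "unless said otherwise" guessing described above.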

That concludes the discussion of meaning. Now, the second property of words: nodes.

Why do words need nodes? Simple: semantics alone is not enough to convey all the subtleties of thought. For example, let’s talk about hatchets:

Originally, there was a word thing to call any item; then there was a word hatchet – a more specific word. Then an axe – an even more specific word for a large hatchet. Then a waraxe, for an axe used in battle. It is amazing how rich in words a language is. But here’s a problem: I need to somehow name a “rusty axe used during the first kingdom war”. I could make a word up – say, call it a kindgomwaraxeruste. But one needn’t be a soothsayer to see this is a poor way to handle the issue. There were thousands of axes used during hundreds of wars – we can’t name them all.

That’s how words became combinable, gaining the ability to form subordinated clusters for naming new objects. And nodes are a word’s ability to join with particular words, producing new meaning on a particular basis.

Say, I has no nodes. “I” is just a word to refer to a person. Now, another word: my. The same meaning (it refers to the speaker), but a phrase with just the word “my” sounds… incomplete. My what? My name? Or my pet crocodile? That’s because my has an open node – it requires another word to join; not just any word, but some noun naming something that can be “mine”. It spreads its branches around, looking for that word to grab. The meaning of the joined word is also predestined: any word joined to my will be “possessed” by it. Hence, a node also has a sort of meaning.

Nodes duplicate a word’s semantic connections. Say, a person referred to as I can also have possessions. “My car” can also be “a car that I own”. But when they say I, we can perceive the word by itself: we don’t care about the possessions of I, or we just assume them. When they say my, however, we wait patiently until they tell us what is owned by my/I. By uttering a word with a node, we not only help the listener connect the words into a phrase properly; we also ask them to wait. “Don’t guess what is mine, I am about to tell you in the next word.” And if they never tell, we feel lied to, confused – as if the speaker has concealed something.
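The "open node" idea behaves much like an unfilled slot in a data structure. Here is a minimal sketch – the `Word` class and the slot name `possessed` are invented for illustration, not a claim about any real grammar formalism:

```python
# Toy model of open nodes: a word may carry unfilled slots that must be
# filled by other words before the phrase feels complete.
class Word:
    def __init__(self, text, open_nodes=()):
        self.text = text
        self.open_nodes = list(open_nodes)  # slots still waiting for a word
        self.filled = {}

    def join(self, node, other):
        """Fill an open node with another word, producing new meaning."""
        self.open_nodes.remove(node)
        self.filled[node] = other

    def is_complete(self):
        return not self.open_nodes

i_word = Word("I")                          # no nodes: complete by itself
my = Word("my", open_nodes=["possessed"])   # begs the question: my what?
my.join("possessed", Word("crocodile"))     # "my crocodile" answers it
```

Until `join` is called, `my` is in exactly the "please wait" state described above: meaningful, but visibly incomplete.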

Now, allow me to introduce two structural categories of words:

  1. Nominatives. Those are words that have semantic meaning but no nodes. They can join with other words, but they don’t require them, and can exist by themselves. Since there is essentially only one part of speech in English, and it happens to be the nominative, the majority of English words are nominatives.
    Examples: apple, run, I, play, green and so on.
  2. Complexes. Those are words that have a semantic nucleus and nodes – one or several. Well, in English it is complicated: there are very few complexes, unlike in other European languages. Those are primarily possessive and oblique-case forms of pronouns, and perhaps past forms of verbs. All those words have a meaning, just like nominatives, but they require other words by their side to fill their free nodes.
    Examples: my, him, drank, me and so on.
    And another one:
  3. Incorporators. An arguably non-existent category with nodes around an empty nucleus. That is, incorporators have no meaning of their own, and their sole purpose is to help other words join properly: articles, conjunctions, prepositions.
    Examples: the, in, after, near and so on.
    Why is it arguable? Because they perhaps do have meanings, just very vague ones. Say, the carries a meaning of “definiteness”, and after corresponds to some abstract idea of the future, of following something else. It’s quite possible that incorporators are merely a very indistinct subgroup of complexes.

Wondering how to tell a nominative from a complex with certainty? Well, notice that there is always a begging question you can ask of a complex. Look:

  • My what?
  • Drank who and what?
  • Me who-did-what-to?

And nominatives… they are just nominatives.
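The three categories follow mechanically from two features: does the word have a nucleus (meaning), and does it have open nodes? A minimal sketch, with the decision rule and word lists drawn only from this article's examples:

```python
# Toy classifier for the three structural categories of words.
def categorize(has_meaning, open_node_count):
    if has_meaning and open_node_count == 0:
        return "nominative"    # complete by itself: apple, run, I, green
    if has_meaning:
        return "complex"       # begs a question: my (what?), drank (who? what?)
    return "incorporator"      # empty nucleus, only nodes: the, in, after
```

So a nominative is simply the case where there is no begging question left to ask.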



Further reading

Previous: Define a Language
Next: Sentence is the Word’s Huge BRO
List of articles on linguistics
