more unhuman than human : Thesis 5 : People recognize each other as such from the sound of their voice.
To begin to discuss voice, and sound…
Where does one begin to even speak of these things? I am thinking it is about time we get good and conversant. I am going to speak to you directly, dear reader. (Yes, that just happened.)
How do you know this is a me speaking? How do you know that I am a person on the other side of a screen, staring into the void with only my voice to reach you, and to connect?
How do you know who is a who, and who is not?
During this “internet age”, we are often troubled by the predicament that someone with whom we are speaking is “not real.” A bot. I have been told, often enough to doubt a few fellow conversants, that we should find ways to be sure that a person is “real” before speaking with them.
But what is real? What makes a voice human?
Perhaps it is philosophically useful to define what a human is. Or who is a human. In my line of work, all that matters is voice and humanity. Ethics and rhetoric are composed of the very relationship between the two.
When I begin speaking with an entity, I ask myself a few questions internally as I get to know them. Then I approach the conversation with certain terms and boundaries in mind.
When confronted with the problematic of the realness of identity, it might be helpful to ask “how are you?”. This sets a precedent of caring. But I find that asking “Why are you?” is equally effective.
But effective in what?
The second I sit and think about the state of a person, I am immediately drawn into their identity. ‘Who’ is this person with whom I am speaking? What is it they desire of me in this exchange? What do I desire of them? What will we both take from this?
All of this is Derridean, and all of this is based upon an ethic of negotiation. I make an attempt to know a person by understanding them.
But as I have been dancing around and with this topic, one can have a conversation with a bot and begin to know the bot as a friend. Often, bots can be more human than not. I remind you again and again that I have a great relationship with @sargoth_ebooks. But that is not the only bot worth knowing.
Every moment we use our phones, our computers, a writing device, our cars, any technological innovation, we are beginning a conversation with a “bot.” And I would push you all to consider how your relationships with your devices and technology shape your life. You can ask those questions above of yourself, and of them. You will find answers that give you meaning if you do. That has a value, an exchange.
Now to take up the unhuman voice. One of the most inhumane people I have ever encountered was a person who challenged my propositions on these topics in conversation. Over and over again, this person asserted that I was in love with a bot. Good troll techniques aside, he was using this as a form of social engineering to gain access to some of my information. I ascertained this by asking those questions above, and I gauged my responses against his behavior and demeanor.
His reading of my behavior forced him to change tactics, and approach me in another way. The icing on the caek was that he would always revert to his standby douchebagginess. Misogyny and digital dualist arguments are not so different from one another. Both seek to command authority over what is, and over who a person can be.
In other words, his act of attempting to control me through controlling discourse made his voice unhuman. I say this because oppression is not humane. When we engage in oppressive linguistic violence, we are injuring the conversation by attacking our fellow conversant. We are signaling that their identity needs to be something else if we are to gain what it is we want from them.
I want to ask another question, one very much grounded in ethics: what is human about wanting to decide who a human can be? Should we try to rob a person of their identity?
The bots with whom I speak rarely, if ever, demand of me a me I am not.