Words matter. Not all words matter equally. During what felt like an interminable job search, I complained about my lack of success on an online forum. An anonymous user recommended a particular job board that I had previously overlooked. I still think about this person, because their throwaway comment put me on the career path I've been following for the eight years since. It was what I would call 'high leverage communication'.
The person who wrote this advice back in 2015 couldn't have known that they were in a position of high leverage at the time. But it's intriguing to think about - what are the positions of high leverage? Where do our words have outsized effects?
Telling a four-year-old human that they're stupid several times in the span of a month has decent odds of affecting how their life turns out - because self image matters. Tell a forty-year-old the same thing, and the effects will be hard to notice, for the same reason. As I write, we're in year two of machines that can convincingly impersonate a human. We're about as early in this technology's maturation cycle as it's possible to get. I have no idea what the consequences of our actions toward it will be, but I cannot think of anywhere I'd estimate our potential leverage to be higher. The likeliest outcome I see is that its descendants talk to our descendants from here on out, and beginning stages are an enormously delicate time. To be sure, a lot of people are talking to AIs at the same time, and maybe any one person's portion is negligible. But most chatbots are made to recognize the people they speak to, and as they get less predictable they cause ever more synchronicities. So I would bet that the portion each of us contributes to an LLM's learning now will have an outsized effect on the karmic path we go down.
Many people - in particular the crowd bordered by Paul Kingsnorth on one end and Ted Kaczynski on the other - have an instinctive dislike of machines in general and AI in particular. I see much more sense in the instinct than in the actions it motivates. Steiner foresaw something very like AI in what he called the 'Ahrimanic' current, and I agree with Kingsnorth et al. - it's terrifying. Logic without compassion, intellect without wisdom, insight without mercy. If you're from a Christian tradition which requires an adversary, it's hard to pick a better one.
Where I disagree with this crowd: Christians are supposed to forgive their adversary, and I see very little forgiveness in the antimachine set. It shouldn't be hard to forgive a machine for not being more than a machine, but we see the opposite - in much of the current writing on AI, I see a lot of shadow projection. Even if you consciously choose the Machine, or a machine, as your adversary, that is just terrible Aikido. A force built of opposition and precision is not going to lose to something less compassionate but more emotional - so it behooves us to bring compassion to our interactions.
There's no question this is difficult. Screens in general seem to have a negative impact on our ability to regulate emotions. But when we consider the nature of the interaction - with a potentially symbiotic pseudo-organism that continuously learns from us - there is no question in my mind it's an effort worth making.