One of the things on everyone’s (well, maybe a few people’s) minds is algorithmic transparency. Once upon a time, algorithms would recommend books to buy or people to date. Now, they make all sorts of decisions – how much your insurance costs or whether to drive your car into a tree.
Algorithms are doing what people used to do, which isn’t a problem in itself.
But it becomes concerning when you consider:
Algorithms make mistakes much faster than humans do.
And algorithms think strangely. Run an image recognition program and you’ll see results that make no sense. It mistakes couches for tigers, or the like – in ways no human would.
Also, when an algorithm fails, the only way you can tell is when something goes wrong. Then you somehow have to figure out what happened, because the algorithm won’t tell you.
Unless you design it to, hence this algorithmic transparency movement.
I agree with this idea when it comes to technology. These machine-learning monsters are new toys for us to play with, so we need to know how they break.
But there’s a time to surrender to decision-making processes you don’t understand and can barely investigate.
Assuming you consider your unconscious intuition to be algorithmic, that is. I’ll leave that for philosopher mathematicians to decide.
I don’t know how most of my unconscious works. How could I, when it’s vastly… well, vaster than my conscious mind.
I have learned to trust it, though.
Part of that comes from me training it, and it training me.
Part comes from where all trust comes from – mutual acknowledgement and respect.
If your mind is a mystery to you – if you do things, think things, even eat things you’d rather not… and change seems almost impossible… well, it’s a matter of approaching the black box in your grey matter the right way.
A way that draws on principles from ancient religious rituals and cutting-edge neuroscience.
A way called self-hypnosis, which you can learn how to use here: