Over the weekend I watched a clip of an orchestra playing Vivaldi’s L’inverno. I thought about how a human wrote that piece of music after years of studying, years of working and playing, mastering multiple instruments. And the violinist who plays the solo practices for hours to achieve a level of skill others may never reach. When you hear it, it evokes an emotional response, and I thought about how music like this has endured for hundreds of years and yet doesn’t sound old, as it is still being discovered by beginning musicians.
AI can’t mine what has never been created. Will it get to a point where it can create another Vivaldi? Who knows? I’m not techy enough for that either and I don’t care. I see the work AI puts out as a person who has had enough plastic surgery that you can tell right away. It’s “good” but false.
Some tech god made the statement that as technology takes over, people will begin to pursue higher tasks and will improve their lives with all the free time they’ll now have. I’m willing to bet people will drink a tremendous amount of beer and get in trouble when their livelihoods are forcibly taken from them. It’s funny how that tech god thinks he knows what’s better for that individual, isn’t it? I’m not touching AI either. I’ll do the work, even if I go down the tubes with it.
Back when people did hard physical labor, they read Shakespeare. Now, not so much.
And debated the issues of the day in a way that was more intellectual than the current "mad libs" template:
"[people I disagree with] are LITERAL [political ideology with negative connotations for historical reasons] and should be [affliction that medieval torturers would blanch at] [number of exclamation points equivalent to your age multiplied by how many days into the week it is]"
Back then, the people who read Shakespeare employed people who did hard physical work.
Where this is all leading is maybe described by Duncan Cameron, one of the central figures in the Philadelphia Experiment and Montauk Project saga. According to Al Bielek, Duncan was his brother (when Al was Ed Cameron) who jumped off the USS Eldridge in 1943 and landed in the future.
Ed remembers floating cities built with anti-gravity technology and a society run by computers.
He talked about the existence of floating cities that had the capability to transport humans to different parts of the planet. The cities sat at great heights; more than 2.5 miles up, they were able to defy the law of gravity.
He also said that everything was controlled by a synthetic intelligence computer system that ran the entire world. No form of government existed, and a huge, crystalline floating computer structure communicated with people telepathically. The society was completely socialistic, and every living person had their basic needs for survival taken care of.
I'm hoping that this is just one of many possible futures, and that those of us who choose a world of art, beauty and creativity will create an alternative to this machine-led nightmare. Maybe that is the "hell" that those who ignore the prophetic wisdom will end up living in - a hell of their own creation.
I think the plastic surgery analogy is perfect. The resulting effect may be impressive, but its artificiality is apparent.
I see that even with AI paintings. I can see something is wrong.
As for the enlightened being convinced the masses will do creative things when they don't have to work? We've seen it all before. The Carnegie quote in the article is just an earlier version.
Back in the dark ages of around 2002, we had to handwrite and turn in our rough drafts using books as our resources. (Imagine the horror.) Said drafts were written in class. Once this was done, we would type another draft addressing whatever comments were made on the rough draft; you had the option of turning this in for more help, or you could wait and turn in a final draft. A little later we could use internet sources, but they were always limited (say, to three citations), and again the rough drafts were handwritten in class. It seems like this is a rather easy solution to the ChatGPT essay problem. Good luck getting ChatGPT to write your essay when all you have at your disposal is pencil, paper, and a pile of library books.
I am talking about the high school level (as I don’t think the Librarian is a college professor); adjustments would likely have to be made at the university level, although for my college English class taken in high school we had similar rules, and it was the best college-level course I had, including my doctoral work. In four years of undergraduate work and four years of doctoral training I wrote very few essays outside of that English class. Many, many PowerPoints, but very few essays.
That said, the sooner universities dissolve into bankruptcy, the better for everyone. The current model is hopelessly broken and there is no will to reform it; burning it all down and starting from scratch seems like the best option and the only way out of the current debt slavery we put college graduates in.
Indeed. I would love to see a return to the Renaissance model of independent scholar / tutors and their circle of disciples.
I once recommended that to a professor; they said it would be too expensive to have one professor per handful of students. I did some quick math, pointed out how much each of us students was paying per hour to be in their class, then asked how many students it would take to equal how much they were being paid per class. Even with only 8 students, and factoring in benefits, it would have been a pretty big pay increase.
Higher Ed pays absolute shite. PhDs in the average State University make less than B.S.-level Nurses their first year out. Even after many years, the pay is dismal. The money comes in at the higher admin levels where the only concern is “record enrollment” and not getting sued when the drama dept director molests Furry undergrads with Autism.
I teach at both the high school and college levels.
It's just that humans are still better at adapting to the circumstances. Writing a text that Google Translate would not misunderstand is already a useful art.
Humans learn to imitate machines at a much faster rate than machines learn to imitate humans. The Turing test was a huge misunderstanding from the get-go.
(This might be tangential to the article, I'm just thinking aloud at this point)
You are assuming the abilities of humans who weren't educated by AI. I don't think the author says this explicitly, but it's easy to infer from what he does say:
Humans "educated" in an education system with AI cheats will no longer be able to imitate machines at a faster rate, or at any rate at all, as they will just ask the AI to imitate the AI. Products of this system will be phenomenally ill-equipped for anything other than asking an AI to do something for them.
Education as the author predicts it will be an expensive and time-consuming high-tech lobotomy.
They might be able to ask better though. Writing prompts is already an art.
I agree, with one caveat. Not everyone will buy it.
Post-Covid, we all have a new awareness of the people around us. Some people will think critically and have some combination of determination and discipline to act on their insights; the most obvious example is resistance to novel healthcare initiatives.
But I think this is a person type that is increasingly at odds with everyone else. How you do anything is how you do everything.
I too share your concerns about AI. I also agree the emergence of consciousness or Skynet is unlikely. Its impact will be corrosive rather than revolutionary, in a similar way to how takeaway food that can be quickly delivered from a handy app may make you less likely to eat a healthy meal.
But some resist easy virtues. I suspect what will actually emerge is an aware group prepared to forgo the promises of this technology just as some people know how to cook.
Another aspect of Covid for me is that I now realize a portion of society is lost already. They won't make it. They won't build anything better. Some will work to destroy anything that is built. They will eat their cricket burgers in their pods.
The future is making contact with the like-minded, those others who will take the long way round, and recognizing that whatever we create will look more like a human world.
This is both the fear and the absurdity of the WEF type vision. It does appeal to the unimpressive, but repels the freethinking.
So I am with you. I know how much effort I put into articles few people read. But while our detractors would see this as replaceable with AI, they overlook that the real benefit of the writing is what it does to me, the author. It changes me. Organizing an essay makes me think; it challenges what I think I know. And it teaches me to try to articulate complicated things.
Perhaps a more succinct way to put it: in this easy age of labour-saving machines and comfort, some still lift weights. Those people will make it even if they automate everything.
This is correct, I think. It will lead to more bifurcation. In this case, between the people who work to retain their human skills for the benefit that using the skill provides to them (not necessarily the end product of the skill, but the benefit that comes from using it), who will be few in number, and the masses who will atrophy. More "haves" and "have nots", if you will, but based on yet another vector.
I suspect so. The academic Cal Newport has written on this. His thing is essentially attention and focus. He believes the ability to focus will become a superpower as it is atrophying in so many thanks to digital technology.
I find Substack decent because it has longform material, although I have read that most people just skim articles, so who knows. I do find Notes easy to get lost in, so I ration it.
I might be in the minority, but my profession (law) could be largely improved by excising the human element. Law is ass-achingly boring and inhuman as it is, and AI is well-suited to trawling through hundreds of legal decisions to determine the precedent that should be applied in a new case. The only creativity involved in lawyering is the very human propensity to trick, twist, obfuscate, explain away, rationalize, and flat-out lie. Also: lawyers are gigantic pricks so replacing lawyers with robots would only benefit society. Judges too, while we’re at it. Law is supposed to be impartial—the entire Anglo-American legal tradition is based upon this fantasy—but we all know that impartiality is utterly impossible for people.
My only experience with law is by way of political science, but I’ve always found it to be an area of creative fertility. The US has experienced three entirely novel legal regimes, 1787, 1865, and 1964, the latter two inhabiting the skinsuit of the Constitution, with each necessitating novel interpretations of plainly written law. It happens that these regimes are successively more awful, but someone with a good legal imagination would be needed to right things just as they were screwed up in the first place.
A sword is the only thing that can right the law in my opinion. I don’t see how it can be reformed.
Constitutions are written on hearts before they’re written on paper. It’s the internalized, unwritten constitutions that men draw swords over.
Hence sites like this.
Gods serve the Laws men create.
Your idea of replacing a lawyer-prick culture with a lawyer-robot culture is very enticing. The risk, however, is that you might well just push the prickery up the chain... to the algorithms.
Also, one of the things our Anglo culture desperately needs is de-litigising. I dread to think how much of our public money now goes on litigation (plus attempts to pre-empt litigation).
Loser pays would cure a lot of the ills.
Real AI maybe. LLMs will just spit out bullshit precedents in nonexistent cases. They're fiction generators at best. And they'll trick, twist, obfuscate, explain away, rationalize and flat-out lie with aplomb. Because they do not think, or reason, or anything.
They just generate linguistically plausible bullshit. It's all they're functionally capable of. They're just word-predicting machines. Predict what word follows the prompt, repeat ad nauseam.
That the plausible-sounding bullshit happens to be true often enough is only because we repeat true things often enough in writing that it becomes statistically likely that a truthful or relevant word will be selected.
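To make that "predict the next word, repeat" description concrete, here is a minimal sketch of the loop. It assumes the Hugging Face transformers library and the public gpt2 checkpoint purely for illustration; real chatbots wrap far more machinery around this, but the core mechanism is the same.

```python
# Minimal sketch (illustrative only): greedy next-token generation with GPT-2.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The precedent that applies in this case is"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                      # generate 20 tokens, one at a time
        logits = model(ids).logits           # scores for every possible next token
        next_id = logits[0, -1].argmax()     # pick the statistically likeliest one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))              # plausible-sounding text, true or not
```

Nothing in that loop checks whether the cited precedent exists; it only asks which token is most likely to come next.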
I think parts of the law would improve by using _more_ of the human element—in particular contracts. Why can’t these be written for normies? I know, specificity and language interpretation and preciseness yada yada. But I review contracts for my company and as a non-lawyer it’s quite painful to get through all the heretofores and wheretofores.
As someone said, you can. You could write on a piece of paper "i agree to pay Bob $1000 to paint my house" and if Bob paints your house and you don't pay him, he can take you to court.
There was an interesting example from a few years ago. 2 mega rich guys (IIRC, 1 was Warren Buffet) were coming to an agreement, and rather than spend mid 8 figures on contract fees, they did it with a page of notes and a handshake due to trusting each other to be fair if a dispute came up.
So it's not that you NEED to make a contract hundreds of pages to be legally viable, it's so that the other guy doesn't screw you in the interpretation. It's not the legal system that is the problem with contracts, but the darkness of the human soul.
You can write simple contracts. The risk is that if you leave something out, the judge will determine, themselves, what you would have put into the contract if you had taken the time to agree on the point -- which means the judge writes the contract for you. And so we ended up with people using contracts that were more long-winded to try to prevent that.
Other countries address the issue by having mandatory provisions in a legal code, so you don't have to repeat all of these in the contract. Germany, for example, has this, and has much shorter contracts than we do, but much of the freedom of contract is also removed because it's included in the legislation that is mandatorily a part of every contract.
My old man worked on insurance claims, and the original contract was 160 words. Compare that to the mess of legal jargon today.
Would those who program and administer the computers be impartial? As a lawyer, I don't disagree with most of what you say, but in general justice is more likely to be found between 2 self-serving assholes arguing than whatever anonymous individual administers the computer.
I'm reminded of the first episode of the phenomenal TV show "Blake's 7" when Blake gets sentenced to the prison planet Cygnus Alpha. Highly recommended, if you can stand horrible special effects.
I agree with you on what exactly is being lost.
Thoughts: the end product, not the production. One of the things that technology has taken from us is the struggle of learning and mastering ___ (fill in the blank here). In education, for instance, we should be educating the student in the process of learning. The hours upon hours it takes to learn to play an instrument well enough to produce sounds that are pleasant to the ear, versus placing your fingers on the buttons of an electronic keyboard that can simulate music (samba!) and, bam, out comes something that sounds musical; the latter is appealing to many.
Secondly, I think this is linked to an increasingly infantile mindset. All around me (and frankly in me) I observe that almost everything has become geared to producing infantilism. (Not to be mistaken for childlike awe.)
Struggling for anything is looked down upon. Adults wearing infantile clothing, with bland to sullen faces, shrugging their shoulders at most questions and feeling rather than thinking. “I feel” has replaced “I think”. Adults with pacifiers (tech items) pushing buttons to provide instant gratification for their latest whims.
This has to be seen as a demonic disorientation because when you scratch the surface of many things and begin to dig, what surfaces is something so unspeakable and filthy. The end of humanity. The end of belief and production of beauty. The end of MAKING. Making love, making an effort, etc. Finding the beauty and pride in the effort, even if the first efforts do not produce perfection. And eventually when people are weary of the loss of these things, they lose their will. Enthusiasm. Infectious laughter. Joy. People will not produce children.
AI produces images etc. that are facsimiles of what we as humans produce. As the producers of these things become less and less aware of, and able to produce, real things, thoughts, beauty, etc. themselves, the results will become less and less. I struggle for what the next word should be. Less... Godlike; because we are, because God is. We make beautiful things because of the awareness of God in our souls. As we get further away from God, we move toward ‘becoming’ like God, contemplating instead our own wonderfulness, and the end result?
If the bugmen don't breed and we do they lose in three generations. Keep art and beauty alive in your own life and let them build their personal hells. It will be fine. We just have to survive this and keep our families bright eyed, happy and healthy away from the hives.
This reminds me of the Biblical proscription against making graven images. Maybe it's there for a reason.
I'm strictly in the anti AI camp.
I'm glad something I wrote could inspire additional analysis and thought! Well done.
Thank you kindly. Your piece was great.
Just briefly:
As long as you came to the conclusion that it's a dead end, you got the most important point, but (this is all non-technical):
- it does not scan the whole internet. It doesn't scan the internet at all
- it's a statistical model of what words are likely to follow other words, with no concept of context, meaning, or truth
- said statistical model is pre-built by scanning lots of manually filtered ("cleaned") content
- it does not learn dynamically, either from its users or by periodically scanning new content. It must be rebuilt each time new content is to be scanned.
- it's not in any sense "real AI". It's all smoke and mirrors.
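As a toy illustration of those bullet points (emphatically not how ChatGPT is actually built), here is a word-level statistical model fitted once on a fixed, hand-picked corpus. Once built, it only looks up frequencies; it never reads anything new on its own and has no notion of truth or meaning.

```python
# Toy sketch: a "model of which words follow which" built once from a fixed corpus.
from collections import Counter, defaultdict
import random

corpus = "the court held that the contract was void and the court dismissed the claim".split()

# Build the model once: for each word, count which words follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Pick a follower in proportion to how often it appeared after `word`."""
    counts = follows[word]
    if not counts:
        return random.choice(corpus)          # fall back to any corpus word
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

text = ["the"]
for _ in range(8):
    text.append(next_word(text[-1]))
print(" ".join(text))   # fluent-looking output with no concept of truth
```

Nothing here updates after the build step; feeding it new text would mean rebuilding the counts from scratch, which is the same limitation the bullets describe, just at a vastly smaller scale.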
It's a terrible, awful, stupid idea to have it write tests for you. They will be randomly wrong, stupid, unanswerable, or only answerable wrongly. You were completely right to intuit this. That admin putting on that talk was not giving a seminar; he was delivering a sermon to the idiot god of Silicon Valley, and was himself an idiot.
Edit: also, there absolutely are ways to detect ChatGPT plagiarism. See for example https://www.zerogpt.com. Because they are statistical models, and imperfect, they follow statistical patterns. Those patterns can be detected like a fingerprint. Simple free detectors like ZeroGPT are generally less sophisticated than premium ones, but they work well enough for lazy students who just copy-paste from GPT without any further effort to confound the detectors.
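For what it's worth, the fingerprint idea can itself be sketched in a few lines: machine-written text tends to be unusually predictable to a language model. This is only the general principle behind detectors of this kind, not ZeroGPT's actual method, and gpt2 is used here purely as a stand-in scorer.

```python
# Rough sketch: score text by how predictable it is to a language model.
# Lower perplexity = more predictable = (roughly) more likely machine-written.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss    # average next-token "surprise"
    return float(torch.exp(loss))

print(perplexity("The quick brown fox jumps over the lazy dog."))
```

Real detectors combine this with other statistics (burstiness, sentence-length variance, etc.), which is why the simple version only catches the laziest copy-pasting.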
That’s all very interesting and I’m glad you filled me in. I’m also glad that I was able to gauge through idiot intuition what smarter men gleaned through careful study.
I feel much the same way. I have started using AI but only to mock the managerial, woke/trans enterprise. The only tool I am really interested in right now is the guitar. I am taking steps to spend much of the rest of my life with books, and I am thinking about taking some steps to challenge men to be more conscious and vital, in my immediate community.
I sometimes wonder if, in the vacuum that is created by the decline of the American empire, humanity might smash the machine, practically unconsciously. I think the woke/trans phenomenon is in some sense like a primordial chaos, destroyer of worlds, creator of nothing. Chaos going kinetic as described in a recent post on Tree of Woe?
I have to note that my own review of the film Napoleon did in effect critique its depiction of the deployment of the Voltigeurs.
Robert Mosher or Robert Machine? Now you have aroused our suspicions, Skynet. We remember the sage advice, the future is not written. Your infiltrations won't work here.
Just another pontificating pedagogue
We are all guilty of that 😜
To be a little positive:
I've been wondering for a while now whether organic access to the internet (e.g., Google Glass-style implants) would lead to either the horrors hinted at above or a burning down of the current educational establishment, to be replaced by something better. Combined with the AI above, it seems likely that children will have access to AI and the internet in their heads during class time, likely without a teacher being able to tell and with the blessing of parents, since they want their kid to be competitive. The only way to beat this is to structure classes such that discussion and debate are key, rather than the rote learning and regurgitation that is the current cure-all in education.
Probably not, but I try to be at least a little positive.
Discussion, debate and independent thinking in general are not desirable products in the public education system. Repeating authoritative sources uncritically is. LLMs are an excellent tool for reinforcing the latter, and pretty useless for the former. But you can still teach your own children if that's not what you want them to learn.
Some years ago, a short series of books was released on the world by Gerry Mander, an obvious pen name. Mr. M. made the absolutely stellar observation that American society is not allowed to evaluate the technology it is subject to. Rather, technology is simply implemented, and one's only choice regarding it is a personal one.
Any form of discussion regarding the deployment of technology then, is a rather ridiculous exercise, because other than on a personal level, there is no path through which the technology can be made to serve any purpose other than those purposes which are rarely made apparent by those who subject the world to their technological terrors.
Thus, celebrating new technology for its possibilities is akin to a slave celebrating their masters' acquisition of a new whip.
However powerful this new tech appears to be, it is simply layered upon previously extant tech. The result is an ever-growing complexity, a complexity that demands ever-greater access to energy to power it, together with a mathematical abstraction that creates an estrangement from the conditions that truly define existence.
Ultimately, this is incredibly stupid, as the purveyors of tech create gigantic virtual circuses desperately seeking to mimic life, while life is there all along.
As people become less competent it will be necessary for machines to become more competent.
What could go wrong?
Really? “Perhaps you like the idea of all those teachers and professors and doctors and administrators being fired and sent packing. If so, you’re a Republican.” Political hate is so boring. And furthermore, the inevitable mediocrity of AI means not that everyone becomes a mental slave to generated text. It means that originality will become more prized.
Gratuitously insulting one's audience, or a significant portion thereof, seems an odd strategy.
Yeah I wasn’t fond of that comment either