jillesvangurp 2 minutes ago

I'd expect smart people to be able to use tools to make their work easier, including AI. The bigger picture here is that the current generation of students are going to be using and relying on AI for the rest of their careers anyway. Making them do things the old-fashioned way is not a productive way to educate them. The availability of these tools is actually an opportunity to raise the ambition level quite a bit.

Universities and teachers will need to adjust to the reality that this stuff is here to stay. There's some value in learning how to write properly, of course. But there are other ways of doing that. And some of those ways actually involve using LLMs to criticize and correct people's work instead of having poor teachers do that.

I did some teaching while doing a postdoc twenty years ago. Reviewing poorly written student reports isn't exactly fun, and I did a fair bit of it. But it strikes me how I could use LLMs to do the reviewing for me these days, and how I could push my students to raise their standard of writing.

These were computer science students. Most of them were barely able to write a coherent sentence. The bar for acceptable was depressingly low. Failing 90% of the class was not a popular option with either students or staff. And it's actually hard work reviewing poorly written garbage. Having supported a few students with their master's thesis work, I can say many of them don't really progress much during their studies.

If I were to teach that class now, I would encourage students to use all the tools available to them. Especially AI. I'd set the bar pretty high.

greatartiste an hour ago

For a human who deals with student work or reads job applications, spotting AI-generated work quickly becomes trivially easy. The text seems to use the same general framework (although words are swapped around). We also see what I call 'word of the week', where whichever 'AI' engine gets hung up on a particular English word, often an unusual one, and uses it at every opportunity. It isn't long before you realise that the adage that this is just autocomplete on steroids is true.
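The 'word of the week' effect can be sketched as a simple frequency test. This is only a toy illustration; the baseline numbers below are invented for the example, not real corpus statistics:

```python
from collections import Counter

# Toy baseline of expected per-10,000-word frequencies in ordinary prose.
# These numbers are invented for illustration, not taken from a real corpus.
BASELINE = {"the": 600, "and": 300, "crucial": 5, "delve": 1, "tapestry": 1}

def overused_words(text, min_count=3, ratio=10.0):
    """Flag words that appear far more often than the baseline suggests --
    the 'word of the week' heuristic described above."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    counts = Counter(words)
    total = len(words) or 1
    flagged = []
    for word, n in counts.items():
        expected = BASELINE.get(word, 2) / 10_000  # unknown words: assume rare-ish
        if n >= min_count and (n / total) > ratio * expected:
            flagged.append(word)
    return flagged
```

A real detector would need a large reference corpus and per-author baselines, and as the comments below point out, even then it only catches the laziest output.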

However, programming a computer to do this isn't easy. In a previous job I dealt with plagiarism detectors and soon realised what garbage they were (and also how easily fooled they are - but that is another story). The staff soon realised it too, so if a student accused of plagiarism decided to argue back, the accusation would be quietly dropped.

  • acchow an hour ago

    > For a human who deals with student work or reads job applications, spotting AI-generated work quickly becomes trivially easy. The text seems to use the same general framework (although words are swapped around). We also see what I call 'word of the week'

    It's easy to catch people who aren't making the slightest effort to avoid getting caught, right? I could instead feed a corpus of my own writing to ChatGPT and ask it to write in my style.

    • hau 31 minutes ago

      I don't believe detection is possible at all if any effort is made beyond prompting chat-like interfaces to "generate X". Given a hand-crafted corpus of text, even current LLMs can produce perfect style transfer for a generated continuation. If someone believes it's trivially easy to detect, they absolutely have no idea what they are dealing with.

      I assume most people make the least amount of effort and simply prompt a chat interface to produce some text; such text is fairly detectable. I would like to see some experiments even for this type of detection, though.

      • hnlmorg 18 minutes ago

        Are you then plagiarising if the LLM is just regurgitating stuff you’d personally written?

        The point of these detectors is to spot stuff the students didn’t research and write themselves. But if the corpus is your own written material then you’ve already done the work yourself.

        • throwaway290 11 minutes ago

          An LLM is just regurgitating stuff as a matter of principle. You can request someone else's style. People who are easy to detect simply don't do that, but they will learn quickly.

  • tessierashpool9 19 minutes ago

    the students are too lazy and dumb to do their own thinking and resort to ai. the teachers are also too lazy and dumb to assess the students' work and resort to ai. ain't it funny?

    • miningape 4 minutes ago

      It's truly a race to the bottom.

  • ClassyJacket an hour ago

    How are you verifying you're correct? How do you know you're not finding false positives?

    • Etheryte an hour ago

      Have you tried reading AI-generated code? Most of the time it's painfully obvious, so long as the snippet isn't short and trivial.

      • thih9 23 minutes ago

        To me it is not obvious. I work with junior level devs.

greyadept 2 hours ago

I'd be really interested to run AI detectors on essays from years before the ChatGPT era, just to see if anything gets flagged.

  • woernsn an hour ago

    Yes, 3 out of 500 essays were flagged as 100% AI generated. There is a paragraph in the linked article about it.

    • _pdp_ 24 minutes ago

      Frankly, this study is not very good. Before ChatGPT there were Davinci and other model families on which ChatGPT (which became GPT-3.5) was ultimately based; they are the predecessors of today's most capable models. They should test on work that is at least 10 to 15 years old to avoid this problem.

    • greyadept 38 minutes ago

      And another 9 flagged as partially AI.

prepend 13 hours ago

My kids’ school added a new weapons scanner as kids walk in the door. It’s powered by “AI.” They trust the AI quite a bit.

However, the AI identifies the school-issued Lenovo laptops as weapons, so every kid was flagged. Rather than stop using such a stupid tool, they just have the kids remove their laptops before going through the scanner.

I expect people who aren't smart enough are buying “AI” products and trusting them to do the things they want them to do, even though they don't work.

  • closewith an hour ago

    Reading this comment, it sounds to me that you live in a dystopian nightmare.

    • MathMonkeyMan 34 minutes ago

      Many schools are prisons, same as ever.

    • immibis 7 minutes ago

      It's called the USA. School kids regularly commit mass murders at school, hence the security.

      • Cthulhu_ 4 minutes ago

        Clearly the answer is airport grade security at schools and militarizing police, instead of fixing the root causes.

  • testfoobar 2 hours ago

    Sometimes suboptimal tools are used to deflect litigation.

  • ffujdefvjg an hour ago

    > I expect people who aren't smart enough are buying “AI” products and trusting them to do the things they want them to do, even though they don't work.

    People are willing to believe almost anything as long as it makes their lives a little more convenient.

  • TrainedMonkey an hour ago

    I wonder if it's the batteries; they look quite close to explosives on a variety of scanning tools. In fact, both chemically store and release energy, just on extremely different timescales.

  • willvarfar 2 hours ago

    Do you think it stupid to scan kids for weapons, or stupid to think that a metal detector will find weapons?

    • selcuka 2 hours ago

      Not the OP, but obviously it wasn't a metal detector, otherwise it would've detected all brands of laptops as weapons. It's probably an image based detector.

      The problem is, if it has been tested so badly that it detects Lenovo laptops as weapons, there is a good chance that it doesn't properly detect actual weapons either.

    • ipaddr an hour ago

      I think it's overboard to scan for weapons at all schools, but very important to scan at some schools.

    • ClassyJacket an hour ago

      I think it's stupid to have a country where guns are legal.

      • ndsipa_pomu 20 minutes ago

        Guns are legal in almost every country - I think your problem is with countries that have almost no restriction on gun ownership. e.g. Here in the UK you can legally own a properly licensed rifle or shotgun and even a handgun in some places outside of Great Britain (e.g. Northern Ireland).

        • xnorswap 7 minutes ago

          Just because something is technically legal, doesn't mean it's in any way common or part of UK culture to own a gun.

          There hasn't been a school shooting in the UK for nearly 30 years. Handguns were banned after the last school shooting and there hasn't been one since.

          https://en.wikipedia.org/wiki/Category:School_shootings_in_t...

          That fact is sometimes forgotten by schools who copy the US in having "active shooter drills", though. Modern schools sound utterly miserable.

  • mazamats 9 hours ago

    I could see a student hollowing out the laptop and hiding a weapon inside to sneak it in, if that's the case.

    • hawski 2 hours ago

      That is beyond silly. Unless students go naked they can have a weapon in a pocket.

      • setopt 2 hours ago

        The point was that if the laptop is taken out and doesn’t go through the scanner, but the rest of the student has to go through the scanner, then the laptop is a great hiding place. Presumably that scanner can at least beep at a pocket knife.

        • hawski an hour ago

          Oh, indeed!

          But if they are not otherwise checked it would be quite useless.

jmugan 13 hours ago

My daughter was accused of turning in an essay written by AI because the school software at her online school said so. Her mom watched her write the essay. I thought it was common knowledge that it was impossible to tell whether text was generated by AI. Evidently, the software vendors are either ignorant or are lying, and school administrators are believing them.

  • ffujdefvjg an hour ago

    I expect there will be some legal disputes over this kind of thing pretty soon. As another comment pointed out: run the AI-detection software on essays from before ChatGPT was a thing to see how accurate these tools are. There's also the problem of autistic students having their essays flagged disproportionately, so you're potentially looking at some sort of civil rights violation.

  • clipsy 13 hours ago

    > Evidently, the software vendors are either ignorant or are lying

    I’ll give you a hint: they’re not ignorant.

  • add-sub-mul-div 12 hours ago

    Imagine how little common knowledge there will be one or two generations down the road after people decide they no longer need general thinking skills, just as they've already decided calculators free them from having to care about arithmetic skills.

    • arkh an hour ago

      We don't learn directions now: we use GPS.

      We don't do calculations: computers do it for us.

      We don't accumulate knowledge: we trust Google to give us the information when needed.

      Everything in a small package everyone can wear all day long. We're at the second step of transhumanism.

      • hyperbrainer 25 minutes ago

        At least the first two are far more accurate than humans could ever be. The third, i.e. trusting others to vet and find the correct information, is the problem.

    • gosub100 an hour ago

      It's more insidious than that. AI will be used as a liability shield/scapegoat, so will become more prevalent in the workplace. So in order to not be homeless, more people will be forced to turn their brains off.

  • newZWhoDis 11 hours ago

    The education system in the US is broadly staffed by the dumbest people from every walk of life.

    If they could make it elsewhere, they would.

    I don’t expect this to be a popular take here, and most replies will be NAXALT fallacies, but in aggregate it’s the truth. Sorry, your retired CEO physics teacher who you loved was not a representative sample.

    • lionkor 3 hours ago

      In Germany, you have to do the equivalent of a master's degree (and then a bunch) to teach in normal public schools

    • krick 10 hours ago

      It's not just the USA; it's pretty much universal, as far as I've seen. People like to pretend it's some sort of noble profession, but I vividly remember a conversation with recently graduated ex-classmates where one of them was complaining that she had failed to get in at every department she applied to, so she had no choice left but to apply to the department of education (I guess? I don't know the name of the American equivalent: the bachelor-level program for people who are going to be teachers). At that moment I felt suddenly validated in all my complaints about the system we had just passed through.

      • twoWhlsGud 2 hours ago

        I went to public schools in middle class neighborhoods in California from the late sixties to the early eighties. My teachers were largely excellent. I think that was due to cultural and economic factors - teaching was considered a profession for idealistic folks to go into at the time and the spread between rich and poor was less dramatic in the 50s and 60s (when my teachers were deciding their professions). So the culture made it attractive and economics made it possible. Another critical thing we seem to have lost.

        • CalRobert an hour ago

          It was the tail end of when smart women had few intellectually stimulating options and teacher was a decent choice.

          • AStonesThrow 19 minutes ago

            For hundreds of years, women could have amazing opportunities by pursuing a religious vocation, get fantastic education in their religious order, and then enjoy a fulfilling life-long ministry in health care, education, social services, etc. All her material and spiritual needs would be provided by her community. For life. Not merely until retirement. Until she died.

            Furthermore, young lay women could start out as teachers, which is a fantastic way to learn how to care for young children, and when such a seasoned teacher would eventually marry and begin her childbearing years, she was quite well-prepared to care for children of her own.

            Nowadays, fewer and fewer women know how to be homemakers, mothers, or wives, and so they just want to go straight into STEM and/or "girlboss" type stuff. Any woman who actually wishes to care for children, or educate them, is perceived as weak and reactionary.

      • smokel 2 hours ago

        Sounds like a self-fulfilling prophecy. We educate everyone to be the smartest person in the class, and then we don't have jobs for them. And then we complain that education is not good enough. Shouldn't we conclude that education is already a bit too good?

    • JumpCrisscross 10 hours ago

      > your retired CEO physics teacher who you loved was not a representative sample

      Hey, he was Microsoft’s patent attorney who retired to teach calculus!

  • lithos 12 hours ago

    AI does have things it does consistently wrong. Especially if you don't narrow down what it's allowed to grab from.

    The easiest for someone here to see is probably code generation. You can point at parts of it and go "this part is from a high-school level tutorial", "this looks like it was grabbed from college assignments", and "this is following 'clean code' rules in silly places"(like assuming a vector might need to be Nd, instead of just 3D).

  • Daz1 3 hours ago

    >I thought it was common knowledge that it was impossible to tell whether text was generated by AI.

    Anyone who's been around AI generated content for more than five minutes can tell you what's legitimate and what isn't.

    For example this: https://www.maersk.com/logistics-explained/transportation-an... is obviously an AI article.

    • bryanrasmussen 3 hours ago

      >Anyone who's been around AI generated content for more than five minutes can tell you what's legitimate and what isn't.

      to some degree of accuracy.

    • kreyenborgi an hour ago

      Obviously false, as LLMs parrot what they're trained on. It's not that hard to get them to regurgitate Shakespeare or what have you.

      • Daz1 35 minutes ago

        Sounds like a skill issue on your part

    • zeroonetwothree 2 hours ago

      It’s impossible to tell AI apart with 100% accuracy

anonzzzies an hour ago

I don't know what these 'students' are doing, but it's not very hard to prompt a system into not using the easily detectable 'ai generated' language at all. Adding in some spelling errors and uncapping some words (like ai above here) also makes it more realistic. Just giving it an example of how you write, telling it to keep your vocabulary, and writing some python to post-process the output makes it impossible for humans or ai detectors to detect ai. You can also ask multiple ais to rewrite it. Getting an nsfw one to add in some 'aggressive' contrary position also helps, as gpt/claude would not do that unless jailbroken (which is whack-a-mole).
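A minimal sketch of the kind of python post-processing described here; the typo rules and rates are arbitrary choices for illustration, not any particular tool's method:

```python
import random

def roughen(text, seed=0, typo_rate=0.05):
    """Make generated text look less polished: occasionally swap two
    adjacent letters (a fake typo) or drop a word's capitalization
    (writing 'ai' for 'AI'). Seeded so the output is reproducible."""
    rng = random.Random(seed)
    out = []
    for w in text.split(" "):
        r = rng.random()
        if r < typo_rate and len(w) > 3:
            i = rng.randrange(1, len(w) - 2)          # pick an interior position
            w = w[:i] + w[i + 1] + w[i] + w[i + 2:]   # swap two adjacent letters
        elif r < 2 * typo_rate and w[:1].isupper():
            w = w.lower()                             # uncap the word
        out.append(w)
    return " ".join(out)
```

Whether this actually fools a given detector is an empirical question, and the effort starts to approach just writing the text yourself.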

  • Ekaros 33 minutes ago

    Sounds like almost the same level of effort as just writing it yourself. Or getting AI to write a draft and then quickly rewriting it. Humans are lazy, students especially so.

krick 11 hours ago

It's kinda nuts how adults have learned to trust some random algorithms in a year or two. They don't know how it works, they cannot explain it, they don't care; it just works. It's magic. If it says you cheated, you cheated. You cannot do anything about it.

I want to emphasize that this isn't really about trusting magic; it's about people nonchalantly doing ridiculous stuff nowadays without being held accountable for it, apparently. For example, there were times back at school when I was "accused" of cheating, because it was the only time I liked the homework in some class and took it seriously, and it was kinda insulting to hear that there's absolutely no way I did it. But I still got my mark, because it doesn't matter what she thinks if she cannot prove it, so please just sign it and fuck off; it's the last time I'm doing my homework for your class anyway.

On the contrary, if this article is to be believed, these teachers don't have to prove anything; the fact that a coin flipped heads is considered proof enough. And everyone supposedly treats it as if it's ok. "Well, they have this system at school, what can we do!" It's crazy.

  • immibis 4 minutes ago

    See HyperNormalisation.

  • arkh an hour ago

    It is not a bug, it is a feature.

    That's how you can mold society as you like at your level: this student's older sibling was a menace? Let's fuck them over, being shitty must run in the family. You don't like the race / gender / sexuality of a student? Now "chatGPT" can give you an easy way to make their school life harder.

owenpalmer an hour ago

As an engineering major who was forced to take an English class, I will say that on many occasions I purposely made my writing worse, in order to prevent suspicion of AI use.

gradus_ad 13 hours ago

Seems like the easy fix here is move all evaluation in-class. Are schools really that reliant on internet/computer based assignments? Actually, this could be a great opportunity to dial back unnecessary and wasteful edu-tech creep.

  • dot5xdev 2 hours ago

    Moving everything in class seems like a good idea in theory. But in practice, kids need more time than 50 minutes of class time (assuming no lecture) to work on problems. Sometimes you will get stuck on 1 homework question for hours. If a student is actively working on something, yanking them away from their curiosity seems like the wrong thing to do.

    On the other hand, kids do blindly use the hell out of ChatGPT. It's a hard call: teach to the cheaters or teach to the good kids?

    I've landed on making take-home assignments worth little and making exams worth most of their grade. I'm considering making homework worth nothing and having their grade be only 2 in-class exams. Hopefully that removes the incentive to cheat. If you don't do homework, then you don't get practice, and you fail the two exams.

    (Even with homework worth little, I still get copy-pasted ChatGPT answers on homework by some students... the ones that did poorly on the exams...)

  • OptionOfT 12 hours ago

    That overall would be the right thing. Homework is such a weird concept when you think about it. Especially if you get graded on the correctness. There is no step between the teacher explaining and you validating whether you understood the material.

    Teacher explains material, you get homework about the material and are graded on it.

    It shouldn't be like that. If the work (i.e. the exercises) are important to grasp the material, they should be done in class.

    It also removes the need to hire tutors.

    • yallpendantools 11 hours ago

      > If the work (i.e. the exercises) are important to grasp the material, they should be done in class.

      I'd like to offer what I've come to realize about the concept of homework. There are two main benefits to it: [1] it could help drill in what you learned during the lecture and [2] it could be the "boring" prep work that would allow teachers to deliver maximum value in the classroom experience.

      Learning simply can't be confined in the classroom. GP suggestion would be, in my view, detrimental for students.

      [1] can be done in class but I don't think it should be. A lot of students already lack the motivation to learn the material by themselves and hence need the space to make mistakes and wrap their heads around the concept. A good instructor can explain any topic (calculus, loops and recursion, human anatomy) well and make the demonstration look effortless. It doesn't mean, however, that the students have fully mastered the concept after watching someone do it really well. You only start to learn it once you've fluffed through all the pitfalls at least mostly on your own.

      [2] can't be done in class, obviously. You want your piano teacher to teach you rhythm and musical phrasing, hence you better come to class already having mastered notation and the keyboard and with the requisite digital dexterity to perform. You want your coach to focus on the technical aspects of your game, focus on drilling you tactics; you don't want him having to pace you through conditioning exercises---that would be a waste of his expertise. We can better discuss Hamlet if we've all read the material and have a basic idea of the plot and the characters' motivations.

      That said, it might make sense to simply not grade homeworks. After all, it's the space for students to fail. Unfortunately, if it weren't graded, a lot of students will just skip it.

      Ultimately, it's a question of behavior, motivation, and incentives. I agree that the current system, even pre-AI, could only barely live up to ideals [1] and [2] but I don't have any better system in mind either, unfortunately.

  • radioactivist 13 hours ago

    Out of class evaluations doesn't mean electronic. It could be problem sets, essays, longer-form things like projects. All of these things are difficult to do in a limited time window.

    These limited time-window assessments are also (a) artificial (don't always reflect how the person might use their knowledge later) (b) stressful (some people work better/worse with a clock ticking) and (c) subject to more variability due to the time pressure (what if you're a bit sick, or have had a bad day or are just tired during the time window?).

    • aaplok 12 hours ago

      It could also be hybrid, with out-of-class and in-class components. There could even be multiple steps, with in-class components aimed at both verifying authorship and providing feedback in an iterative process.

      AI makes it impossible to rely on out-of-class assignments to evaluate the kids' knowledge. How we respond to that is unclear, but relying on cheating detectors is not going to work.

  • tightbookkeeper 2 hours ago

    Yep. The solutions which actually benefit education are never expensive, but require higher quality teachers with less centralized control:

    - placing less emphasis on numerical grades to disincentivize cheating (hard to measure success)
    - open-response written questions (harder to teach, harder to grade)
    - reading books (hard to determine if students actually did it)
    - proof-based math (hard to teach)

    Instead we keep imagining ever more absurd surveillance systems: “what if we can track student eyes to make sure they actually read the paragraph?”

    • wiz21c 26 minutes ago

      Totally agree. More time spent questioning the students about their work would make AI detection useless...

      But somehow, we don't trust teachers anymore. Those in power want to check that the teacher actually does his job, so they want to see some written, reviewable proof... So the grades are there to control both the student and the teacher. WWW (What a wonderful world).

  • jameslevy 13 hours ago

    The only long-term solution that makes sense is to allow students to use AI tools and to require them to submit a log provided by the AI tool. Adjust the assignment accordingly and use custom system prompts for the AI tools, so that the students are both learning about the underlying subject and learning how to effectively use AI tools.

nitwit005 14 hours ago

In some cases students have fought such accusations by showing their professor the tool flags the professor's work.

Don't know why these companies are spending so much developing this technology, when their customers clearly aren't checking how well it works.

  • Ekaros 13 hours ago

    Aren't they making it exactly because their customers don't check it and still buy it, probably for very decent money? And always remember the buyers are not the end users (the teachers or students) but the administrators. And for them, being seen to do something about the risk of AI is more important than actually doing anything about it.

  • stouset 12 hours ago

    The companies selling these aren’t “spending so much developing the technology”. They’re following the same playbook as snake oil salesmen and people huckstering supplements online do: minimum effort into the product, maximum effort into marketing it.

weinzierl 44 minutes ago

There was a time, when CGI took off, when everything was too polished and shiny and everyone found it uncanny. That started a whole industry producing virtual wear, tear, dust, grit and dirt.

I wager we will soon see the same for text. Automatic insertion of the right amount of believable mistakes will become a thing.

  • ImHereToVote 21 minutes ago

    You can already do that easily with ChatGPT. Just tell it to rate the text it generated on a scale from 0-10 in authenticity. Then tell it to crank out similar text at a higher authenticity scale. Try it.

gorgoiler 2 hours ago

We should have some sort of time constrained form of assessment in a controlled environment, free from access to machines, so we can put these students under some kind of thorough examination.

(“Thorough examination” as a term is too long though — let’s just call them “thors”.)

In seriousness the above only really applies at University level, where you have adults who are there with the intention to learn and then receive a final certification that they did indeed learn. Who cares if some of them cheat on their homework? They’ll fail their finals and more fool them.

With children though, there’s a much bigger responsibility on teachers to raise them as moral beings who will achieve their full potential. I can see why high schools get very anxious about raising kids to be something other than prompt engineers.

  • logicchains an hour ago

    >there’s a much bigger responsibility on teachers to raise them as moral beings who will achieve their full potential.

    There's nothing moral about busywork for busywork's sake. If their entire adult life they'll have access to AI, then school will prepare them much better for life if it lets them use AI and teaches them how to use it best and how to do the things AI can't do.

cfcf14 21 minutes ago

AI detectors do not work. I have spoken with many people who think that the particular writing style of commercial LLMs (ChatGPT, Gemini, Claude) is the result of some intrinsic characteristic of LLMs - either the data or the architecture. The belief is that this particular tone of 'voice' (chirpy sycophant), textual structure (bullet lists and verbosity), and vocab ('delve', et al) serves and will continue to serve as an easy identifier of generated content.

Unfortunately, this is not the case. You can detect only the most obvious cases of the output from these tools. The distinctive presentation of these tools is a very intentional design choice - partly by the construction of the RLHF process, partly through the incentives given to and selection of human feedback agents, and in the case of Claude, partly through direct steering through SA (sparse autoencoder activation manipulation). This is done for mostly obvious reasons: it's inoffensive, 'seems' to be truth-y and informative (qualities selected for in the RLHF process), and doesn't ask much of the user. The models are also steered to avoid having a clear 'point of view', agenda, point-to-make, and so on, characteristics which tend to identify a human writer. They are steered away from highly persuasive behaviour, although there is evidence that they are extremely effective at writing this way (https://www.anthropic.com/news/measuring-model-persuasivenes...). The same arguments apply to spelling and grammar errors, and so on. These are design choices for public facing, commercial products with no particular audience.

An AI detector may be able to identify that a text has some of these properties in cases where they are exceptionally obvious, but fails in the general case. Worse still, students will begin to naturally write like these tools because they are continually exposed to text produced by them!

You can easily get an LLM to produce text in a variety of styles, some which are dissimilar to normal human writing entirely, such as unique ones which are the amalgamation of many different and discordant styles. You can get the models to produce highly coherent text which is indistinguishable from that of any individual person with any particular agenda and tone of voice that you want. You can get the models to produce text with varying cadence, with incredible cleverness of diction and structure, with intermittent errors and backtracking, and anything else you can imagine. It's not super easy to get the commercial products to do this, but trivial to get an open source model to behave this way. So you can guarantee that there are a million open source solutions for students and working professionals that will pop up to produce 'undetectable' AI output. This battle is lost, and there is no closing Pandora's box. My earlier point about students slowly adopting the style of the commercial LLMs really frightens me in particular, because it is a shallow, pointless way of writing which demands little to no interaction with the text, tends to be devoid of questions or rhetorical devices, and in my opinion, makes us worse at thinking.

We need to search for new solutions and new approaches for education.

flappyeagle 12 hours ago

Rather than flagging it as AI, why don't we flag whether it's good or not?

I work with people in their 30s that cannot write their way out of a paper bag. Who cares if the work is AI-assisted or not? Most AI writing is super dry, formulaic, and bad. If the student doesn't recognize this, give them a poor mark for having terrible style.

  • kreyenborgi an hour ago

    Traditional school work has rewarded exactly the formulaic, dry ChatGPT language, while the free-thinking, explorative, and creative writing that humans excel at is at best ignored and more commonly marked down for irrelevant typos, lack of the expected structure, and too much personality showing through.

  • echoangle 12 hours ago

    Because sometimes an exercise is supposed to be done under conditions that don't represent the real world. If an exam is without a calculator, you can't just use one anyway on the grounds that you're going to have one when working. If the assignment is "write a text about XYZ, without using AI assistance", using an AI is cheating. Cheating should have worse consequences than writing bad stuff yourself, so detecting AI (or just not having assignments done unsupervised) is still important.

  • Ekaros 12 hours ago

    Because often the goal of assessing a student is not that they can generate output. It is to ensure they have retained a sufficient amount of the knowledge they are supposed to retain from the course, and can regurgitate it in a sufficiently readable format.

    Actually being able to generate good text is an entirely separate evaluation. And AI might have a place there.

lelandfe 14 hours ago

The challenging thing is, cheating students also say they're being falsely accused. Tough times in academia right now. Cheating became free, simple, and ubiquitous overnight. Cheating services built on top of ChatGPT advertise to college students; Chrome extensions exist that just solve your homework for you.

  • borski 13 hours ago

    I don’t know how to break this to you, but cheating was always free, simple, and ubiquitous. Sure, ChatGPT wouldn’t write your paper; but your buddy who needed his math problem solved would. Or find a paper on countless sites on the Internet.

    • rfrey 3 hours ago

      That's just not so. Most profs were in school years before the internet was ubiquitous. And asking a friend to do your work for you is simple, but far from free.

    • crummy an hour ago

      That wasn't free; people would charge money to write essays, and essays found online would be detected as such.

    • rahimnathwani 3 hours ago

      It wasn't always free. Look at Chegg's revenue trend since ChatGPT came out.

moandcompany 13 hours ago

I'm looking forward to the dystopian sci-fi film "Minority Book Report"

  • m463 12 hours ago

    We should make an AI model called Fahrenheit 451B to detect unauthorized books.

    • moandcompany 8 hours ago

      Open Fahrenheit 451B will be in charge of detecting unauthorized books and streaming media, as well as unauthorized popcorn or bread.

ec109685 2 hours ago

Ycombinator has funded at least one company in this space: https://www.ycombinator.com/companies/nuanced-inc

It seems like a long-term losing proposition.

  • blitzar an hour ago

    > It seems like a long-term losing proposition.

    Sounds like a good candidate to IPO early

  • selcuka 2 hours ago

    Nothing is a losing proposition if you can convince investors for long enough.

stephenbez 9 hours ago

Are any students coming up with a process to prove their innocence when they get falsely accused?

If I were still in school, I would write my docs in a Google Doc, which preserves the edit history. I could potentially also record video of myself typing the entire document, or record my screen.

  • ec109685 2 hours ago

    That’s what the person in the article did:

    “After her work was flagged, Olmsted says she became obsessive about avoiding another accusation. She screen-recorded herself on her laptop doing writing assignments. She worked in Google Docs to track her changes and create a digital paper trail. She even tried to tweak her vocabulary and syntax. “I am very nervous that I would get this far and run into another AI accusation,” says Olmsted, who is on target to graduate in the spring. “I have so much to lose.””

  • Springtime 2 hours ago

    I don't think there's any real way around the fundamental flaw of such systems: they assume there's an accurate way to detect generated text, when even motivated cheaters could use their phone to generate the text and just iterate edits from there, using identical CYA techniques.

    That said, I'd imagine if someone resorts to using generative text their edits would contain anomalies that someone legitimately writing wouldn't have in terms of building out the structure/drafts. Perhaps that in itself could be auto detected more reliably.

  • trinix912 an hour ago

    All of that still wouldn't prove that you didn't use any sorta LLM to get it done. The professor could just claim you used ChatGPT on your phone and typed the thing in, then changed it up a bit.

from-nibly 12 hours ago

This is not something that reveals how bad AI is or how dumb administration is. It's revealing how fundamentally dumb our educational system is. It's incredibly easy to subvert. And kids don't find value in it.

Helping kids find value in education is the only important concern here and adding an AI checker doesn't help with that.

  • trinix912 an hour ago

    > Helping kids find value in education is the only important concern here and adding an AI checker doesn't help with that.

    Exactly. It also does the complete opposite: it teaches kids from fairly early on that their falsely flagged texts might as well have been written with AI, further discouraging them from improving their writing skills, which are still just as useful, AI or not.

mensetmanusman 14 hours ago

My daughter’s 7th grade work is 80% flagged as AI. She is a very good writer, it’s interesting to see how poorly this will go.

Obviously we will go back to in class writing.

  • unyttigfjelltol 13 hours ago

    The article demonstrates that good, simple prose is being flagged as AI-generated. Reminds me of a misguided junior high English teacher who half-heartedly claimed I was a plagiarist for including the word "masterfully" in an essay, when she knew I was too stupid to use a word like that. These tools are industrializing that attitude and rolling it out to teachers who otherwise wouldn't feel that way.

  • testfoobar 2 hours ago

    I'd encourage you to examine the grading policies of the high schools in your area.

    What may seem obvious based on earlier-era measures of student comprehension and success is not the case in many schools anymore.

    Look up evidence based grading, equitable grading, test retake policies, etc.

  • tdeck 2 hours ago

    > Obviously we will go back to in class writing.

    That would be a pretty sad outcome. In my high school we did both in-class essays and homework essays. The former were always more poorly developed and more poorly written. IMO students still deserve practice doing something that takes more than 45 minutes.

  • ipaddr 14 hours ago

    She should run it through AI to rewrite it so that another AI doesn't detect it was written by AI.

    • testfoobar 2 hours ago

      I've heard some students are concerned that any text submitted to an AI detector is automatically added to training sets and therefore will eventually be flagged as AI.

      • itronitron an hour ago

        Well, that is how AI works.

    • minitoar 3 hours ago

      Right, I thought this was just an arms race for tools that can generate output to fool other tools.

selcuka an hour ago

New CAPTCHA idea: "Write a 200-word essay about birds".

SirMaster 2 hours ago

I guess if I was worried about this, I would just screen and camera record me doing my assignments as proof I wasn't using an LLM aid.

kelseyfrog 11 hours ago

The problem is that professors want a test with high sensitivity and students want a test with high specificity and only one of them is in charge of choosing and administering the test. It's a moral hazard.
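
The tension can be sketched numerically. A minimal illustration (all figures hypothetical) of how a detector tuned for the professor's preference, high sensitivity, sacrifices the specificity students care about:

```python
# Hypothetical confusion-matrix numbers for an AI-text detector
# evaluated on 1000 essays (100 AI-written, 900 human-written).

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: fraction of AI-written essays correctly flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: fraction of human-written essays correctly passed."""
    return tn / (tn + fp)

# A detector tuned the way a professor might want it: it catches most
# AI text (high sensitivity) at the cost of falsely flagging honest work.
print(sensitivity(tp=95, fn=5))     # 0.95 - most AI text caught
print(specificity(tn=765, fp=135))  # 0.85 - 135 honest essays flagged
```

With only one party choosing the operating point, those 135 falsely flagged students bear the cost of the professor's preferred trade-off.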

  • ec109685 2 hours ago

    Do professors really not want high specificity too? Why would they want to falsely accuse anyone?

  • tightbookkeeper 2 hours ago

    No. Professors want students that don’t cheat so they never have to worry about it.

    This is an ethics problem (people willing to cheat), a multicultural problem (different expectations of what constitutes cheating), and an incentive problem (credentialism makes cheating worth it).

    Those are hard problems. So a little tech that might scare students and give the professor a feeling of control is a band-aid.

ameister14 13 hours ago

The article mentions 'responsible' Grammarly usage, which I think is an oxymoron in an undergraduate or high school setting. Undergrad and high school are where you learn to write coherently. Grammarly is a tool that actively works against that goal because it doesn't train students to fix their grammatical mistakes, it just fixes them for them, and they become steadily worse (and less detail-oriented) writers.

I have absolutely no problem using it in a more advanced field where the basics are already done and the focus is on research, for example, but at lower levels I'd likely consider it dishonest.

  • borski 13 hours ago

    My wife is dyslexic; grammarly makes suggestions, but it doesn’t fix it for her. Perhaps that’s a feature she doesn’t have turned on?

    She loves it. It doesn’t cause her to be any less attentive to her writing; it just makes it possible to write.

    • ameister14 6 hours ago

      >It doesn’t cause her to be any less attentive to her writing; it just makes it possible to write.

      I was not really referring to accommodations under the ADA. For people that do not require accommodations, the use of them is unfair to their classmates and can be detrimental to their ability to perform without them in the future, as there is no requirement to have the accommodations available to them. This is not the case for someone with dyslexia.

      • borski 3 hours ago

        Fair, I can see why it looks like I confused them. I was solely using her as an example; my point is that Grammarly hasn't caused her knowledge of grammar to get worse, only better. It has taught her over time.

OutOfHere 11 hours ago

I am glad I am done with schooling. I would not want to be a student in this hellscape.

For those going to college, I strongly advise picking a department where such scanning is not performed.

For those in public school, sue.

  • kelseyfrog 2 hours ago

    I'm returning to complete a single class: the writing requirement. It's not that bad. You just run your paper through a 3rd party AI checker beforehand and then cross your fingers and hit submit. You're probably at lower risk than people who don't check. You don't have to outrun the bear, just your fellow students.

more_corn 14 hours ago

A student I know texted me: the AI detector kept falsely flagging his work. "This is how I write!" I gave him some tips to sound less like AI, which is funny because we train AI with RLHF to sound more and more like humans.

Animats 12 hours ago

Most of the ChatGPT type systems have a rather blah default style. That's what you learn as a non-native speaker of the language. Thus the problem for people who learned English from textbooks.

Amusingly, you can push ChatGPT type systems into other styles of writing.

I put in the preamble to the US constitution and asked for different styles:

Modern:

We, the people of the United States, come together to build a stronger, more united country. We want to create fairness for everyone, keep peace at home, ensure our safety, support each other’s well-being, and protect our freedoms for ourselves and future generations. This is why we are establishing this Constitution for the United States.

Gun nut:

We the People of the United States, rallying together to create a stronger, more unified nation, are all about upholding justice, keeping the peace at home, and defending our rights. We’re here to support one another, ensure our freedoms, and safeguard the blessings of liberty for ourselves and future generations. That’s why we’re putting this Constitution in place for the United States of America—because we believe in freedom, security, and our right to bear arms!

Woke:

We, the people of the United States, are coming together to create a more inclusive and equitable society. Our goal is to establish justice for all, promote peace within our communities, ensure safety for everyone, and uplift the well-being of every individual. We are committed to protecting our freedoms and rights, not just for ourselves, but for future generations. This is why we are enacting this Constitution for the United States of America, to foster a nation that values diversity, equality, and the dignity of all.

Christian:

We, the people of the United States, gather together in faith and unity to build a more perfect nation under God. Our purpose is to establish justice and promote peace in our homes and communities. We strive to protect our common security and care for one another’s well-being, guided by the principles of love and compassion. In doing so, we seek to uphold the blessings of liberty that God has granted us, not just for ourselves, but for future generations. This is why we commit to this Constitution for the United States of America, as we seek to honor His will in all that we do.

If you're having trouble with essays being detected as AI, you can run them through an LLM to rewrite them in a different tone.

  • Animats 12 hours ago

    Xi Thought:

    We, the people of the United States, unite to forge a more harmonious society, guided by the principles of collective strength and shared responsibility. Our aim is to establish justice and maintain social stability, ensuring the well-being of all citizens through solidarity and mutual support. By safeguarding our national defense and promoting the common welfare, we embody the spirit of unity and progress. This commitment to our foundational values is why we establish this Constitution for the United States of America, as we work toward a prosperous future for all.

    MAGA:

    We, the great people of the United States, are coming together to create a stronger, more united nation. Our mission is to establish true justice, ensure peace in our communities, and defend our country with pride. We’re all about promoting the well-being of hard-working Americans and securing our freedoms for ourselves and future generations. That’s why we’re affirming this Constitution for the United States of America—because we believe in making our nation great and preserving our rights for all!

    Martin Luther King:

    We, the people of the United States, come together in a spirit of unity and purpose, striving to create a more perfect union. It is our sacred duty to establish justice and ensure tranquility in our communities, uplifting one another in love and understanding. We must provide for the common defense and promote the welfare of all, recognizing that true freedom is a collective journey. In this endeavor, we seek to secure the blessings of liberty for ourselves and future generations. Therefore, we affirm this Constitution for the United States of America as a testament to our commitment to equality, justice, and the enduring hope for a brighter tomorrow.

    Trump:

    We, the amazing people of the United States, are coming together to build a tremendous nation, folks. Our goal? To create a more perfect union—believe me, it’s going to be fantastic! We’re establishing real justice, keeping our communities safe, and defending our great country. We’re promoting the welfare of all hardworking Americans and securing our incredible freedoms for ourselves and our future generations. That’s why we’re putting this Constitution in place for the United States of America—because we’re making America great again, and nobody does it better!

    ChatGPT has automatic blithering nailed.

rolph 14 hours ago

Convergence will occur, measurable as an increasing frequency of false positives from detection.

  • HarryHirsch 14 hours ago

    You mean model collapse, because schoolchildren will soon base their writing on the awful AI slop they have read online? That's fearsome, actually.

    We are seeing this with Grammarly already, where instead of a nuance Grammarly picks the beige alternative. The forerunner was the Plain English Campaign, which succeeded in getting official documents published in imprecise language at a primary-school reading level; it's awful.

rowanG077 14 hours ago

This has nothing to do with AI, but rather with proof. If a teacher said to a student "you cheated," the student disputed it, and in front of the dean or whatever the teacher could produce no proof, of course the student would be absolved. Why is some random tool (AI or not) saying they cheated, without proof, suddenly taken as truth?

  • deckiedan 14 hours ago

    The AI tool report shown to the dean with "85% match" will be used as "proof".

    If you want more proof, then you can take the essay, give it to ChatGPT and say, "Please give me a report showing how this essay was written by AI."

    People treat AI like it's an omniscient god.

    • deepsquirrelnet 13 hours ago

      I think what you pointed out is exactly the problem. Administrators apparently don’t understand statistics and therefore can’t be trusted to utilize the outputs of statistical tools correctly.
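
      A back-of-envelope Bayes calculation (all numbers hypothetical) shows what that misunderstanding costs: when few students actually cheat, even a decent detector's flags are wrong more often than not.

```python
def p_cheated_given_flagged(prior: float, sensitivity: float,
                            specificity: float) -> float:
    """Positive predictive value of a detector flag, via Bayes' rule."""
    p_flag_and_cheat = sensitivity * prior
    p_flag_and_honest = (1 - specificity) * (1 - prior)
    return p_flag_and_cheat / (p_flag_and_cheat + p_flag_and_honest)

# If 10% of students cheat, a flag from an 85%-sensitive,
# 85%-specific detector points at an honest student most of the time.
ppv = p_cheated_given_flagged(prior=0.10, sensitivity=0.85, specificity=0.85)
print(round(ppv, 3))  # 0.386
```

      An "85% match" score reported as "proof" hides exactly this base-rate problem.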

  • JumpCrisscross 14 hours ago

    > the teacher can produce no proof

    For an assignment completed at home, on a student's device, using software of the student's choosing, there can essentially be no proof. If the situation you describe becomes common, it might make sense for a school to invest in a web-based text editor that captures keystrokes and user state, and to require students to use it for at-home text-based assignments.

    That or eliminating take-home writing assignments--we had plenty of in-class writing when I went to school.

    • xnyan 12 hours ago

      >For an assignment completed at home, on a student's device using software of a student's choosing, there can essentially be no proof

      According to an undergraduate student who babysits for our child, some students are literally screen recording the entire writing process, or even recording themselves writing at their computers as a defense against claims of using AI. I don't know how effective that defense is in practice.

      • JumpCrisscross 5 hours ago

        I hate that because it implies a presumption of guilt.

  • happymellon 13 hours ago

    Unfortunately, with AI, AI detection, and schools, it's all rather Judge Dredd.

    They issue the claim, the judgement and the penalty. And there is nothing you can do about it.

    Why? Because they *are* the law.

    • borski 13 hours ago

      That’s not even remotely true. You can raise it with the local board of education. You can sue the board and/or the school.

      You can sue the university, and likely even win.

      They literally are not the law, and that is why you can take them to court.

      • HarryHirsch 13 hours ago

        In real life it looks like this: https://www.foxnews.com/us/massachusetts-parents-sue-school-...

        A kid living in a wealthy Boston suburb used AI for his essay (that much is not in doubt) and the family is now suing the district because the school objected and his chances of getting into a good finishing school have dropped.

        On the other hand you have students attending abusive online universities who are flagged by their plagiarism detector and they wouldn't ever think of availing themselves of the law. US law is for the rich, the purpose of a system is what it does.

        • borski 12 hours ago

          I’m not sure what “used AI” means here, and the article is unclear, but it sure does sound like he did have it write it for him, and his parents are trying to “save his college admissions” by trying to say “it doesn’t say anywhere that having AI write it is bad, just having other people write it,” which is a specious argument at best. But again: gleaned from a crappy article.

          You don’t need to be rich to change the law. You do need to be determined, and most people don’t have or want to spend the time.

          Literally none of that changes the fact that the Universities are not, themselves, the law.

          • HarryHirsch 12 hours ago

            The law is unevenly enforced. My wife is currently dealing with a disruptive student from a wealthy family background. It's a chemistry class, you can't endanger your fellow students. Ordinarily, one would throw the kid out of the course, but there would be pushback from the family, and so she is cautious, let's deduct a handful of points, maybe she gets it, and thus it continues.

            • borski 11 hours ago

              I completely agree that it is unevenly enforced. Still doesn't make universities the law.

      • zo1 13 hours ago

        That could take months of nervous waiting and who-knows how many wasted hours researching, talking and writing letters. The same reason most people don't return a broken $11 pot, it's cheaper and easier to just adapt and move around the problem (get a new pot) rather than fixing it by returning and "fighting" for a refund.

        • borski 13 hours ago

          I agree; I am not saying I am glad this is happening. I am saying it is untrue that universities “are the law.”

          They’re not. That doesn’t make it less stressful, annoying, or unnecessary to fight them.

  • underseacables 13 hours ago

    Universities don't exactly decide guilt by proof. If their system says you're guilty, that's pretty much it.

    • borski 13 hours ago

      Source? I was accused of a couple things (not plagiarism) at my university and was absolutely allowed to present a case, and due to a lack of evidence it was tossed and never spoken of again.

      So no, you don’t exactly get a trial by a jury of your peers, but it isn’t like they are averse to evidence being presented.

      This evidence would be fairly trivial to refute, but I agree it is a burden no student needs or wants.