• 0 Posts
  • 24 Comments
Joined 2 months ago
Cake day: July 16th, 2024



  • I don’t entirely agree, though.

    That WAS the point of NaNoWriMo in the beginning. I went there because I wanted feedback, and feedback from people who cared (no offense to my friends, but they weren’t interested in my writing, and that’s totes cool).

    I think it is a valid core desire to want constructive feedback on your work, and to acknowledge that your own perspective, even on yourself, is incomplete. Whether the AI can or does provide that is questionable, but the starting place, “I want /something/ accessible to be a rubber ducky,” is valid.

    My main concern here is, obviously, that it feels like NaNoWriMo is taking the easy way out for the $$$ and likely its Silicon Valley connections. Wouldn’t it be nice if NaNoWriMo said something like, “Whatever technology tools exist today or tomorrow, we stand for writers’ essential role in the process, and against the unethical labor implications of indiscriminate, non-consensual machine learning as the basis for any process”?


  • NovelAI

    I’ll step up and say, I think this is fine, and I support your use. I get it. I think that there are valid use cases for AI where the unethical labor practices become unnecessary, and where ultimately the work still starts and ends with you.

    In a world, maybe not too far in the future, where copyright law is strengthened, where artist and writer consent is respected, and it becomes cheap and easy to use a smaller model trained on licensed data and your own inputs, I can definitely see how a contextual autocomplete that follows your style and makes suggestions is totally useful and ethical.

    But I understand people’s visceral reaction to the current world. I’d say it’s OK to stay the course.



  • Oh man, anyone who runs on such existential maximalism has such infinite power to state things as if their conclusion has only one possible meaning.

    How about invoking the Monkey’s Paw – what if every statement is true, just not in the way they think?

    1. A perfect memory which is infinitely copyable and scalable is possible. And it’s called: all the things in nature, in sum.
    2. In fact, we’re already there today, because it is, quite literally, the sum of nature. The question for tomorrow is, “so like, what else is possible?”
    3. And it might not even have to try or do anything at all, especially if we don’t bother to save ourselves from ecological disaster.
    4. What we don’t know can literally be anything. That’s why it’s important not to project fantasy, but to conserve the fragile beauty of what you have, regardless of whether things will “one day fall apart”. Death and Taxes, mate.

    And Yud can be both technically right one day and someone whose interpretations today are dumb and worthy of mockery.


  • The issue isn’t even that AI is doing grading, really. There are worlds where using technology to assist in grading isn’t a loss for a student.

    The issue is that all of this is an excuse not to invest in students at all, and the turn here is purely a symptom of that. Because in a world where we invest in technology to assist in education, the first thing that happens is we recognize the completely unsexy and obvious things that also need to happen: funding for maintenance of school buildings, basic supplies, balancing class sizes by hiring and redistricting. You know, the obvious shit.

    But those things don’t attract the attention of the debt metabolism; they’re too obvious and don’t offer more leverage for short-term futures. To believe there is a future for the next generation is inherently risky and ambiguous. You can only invest in it if you actually care.



  • Yeah, this lines up with what I’ve heard, too. There is always talk of new models, but even the stuff in the pipeline not yet released isn’t that differentiated from the existing stuff.

    The best explanation of Strawberry is that it isn’t any particular thing; it’s a marketing and project framing, both internal and external, that amounts to cost optimization and hype driving. Shift the goalposts, tell two stories. One: if we just get affordable enough, genAI in a loop really can do everything (the more modest version: when genAI gets cheap enough by several means, it’ll have several more modest and generally useful use cases, and won’t have to be so legally grey). The other: we’re already there, and one day you’ll wake up and your brain won’t be good enough to matter anymore, or something.

    Again, this is apparently the future of software releases. :/



  • Short story: it’s smoke and mirrors.

    Longer story: this is now how software releases work, I guess. A lot is riding on OpenAI’s anticipated release of GPT-5. They have to keep promising enormous leaps in capability because everyone else has caught up and there’s no more training data. So the next trick is claiming that their next batch of models has “solved” various problems that people say you can’t solve with LLMs, and that the models are going to be massively better without needing more data.

    But, speaking as someone with insider info: it’s all smoke and mirrors.

    The model that “solved” structured data is empirically worse at other tasks as a result, and I imagine the solution basically just looks like polling multiple responses until the parser validates on the other end (so basically it’s a price optimization, afaik).
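    My guess at that parse-and-poll loop can be sketched in a few lines. To be clear, this is purely illustrative and nothing from OpenAI: `generate` is a fake stand-in for a model call, wired here to fail twice before producing valid JSON.

```python
import itertools
import json

# Fake stand-in for an LLM call: emits garbage twice, then valid JSON.
_responses = itertools.chain(
    ["not json", "{broken", '{"letters": 10}'],
    itertools.repeat('{"letters": 10}'),
)

def generate(prompt: str) -> str:
    return next(_responses)

def structured_output(prompt: str, max_tries: int = 5) -> dict:
    """Poll the model until the parser on our end accepts the output."""
    for _ in range(max_tries):
        raw = generate(prompt)
        try:
            return json.loads(raw)  # parser validates; keep this response
        except json.JSONDecodeError:
            continue  # reject and resample, paying for another call
    raise RuntimeError("no parseable response within budget")
```

    Every rejected sample is another paid model call, which is why making this loop cheaper reads as a price optimization rather than a new capability.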

    The next large model, launching with the new Q* change tomorrow, is “approaching AGI because it can now reliably count letters,” but actually it’s still just agents (Q* looks to be just a cost optimization of agents on the backend; that’s basically it), because the only way it can count letters is by invoking agents and tool use to write a Python program and feed the text into that. Basically, it is all the things that already exist independently, wrapped up together. Interestingly, they’re so confident in this model that they don’t run the resulting Python themselves. It’s still up to you, or one of those LLM wrapper companies, to execute the occasionally broken code to, um… checks notes… count the number of letters in a sentence.
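    The flow described above — model emits a program, someone else executes it — is easy to mock up. Everything named here is hypothetical: `model_generated_code` stands in for whatever text the LLM actually returns, and the point is that the caller runs it, not the model.

```python
# Hypothetical stand-in for code the model might emit; in reality it
# arrives as text in the model's response, occasionally broken.
model_generated_code = """
def count_letters(text):
    return sum(ch.isalpha() for ch in text)

result = count_letters(TEXT)
"""

def run_tool_code(code: str, text: str) -> int:
    # The caller (you, or an LLM wrapper company) executes the program;
    # the model itself never runs it.
    namespace = {"TEXT": text}
    exec(code, namespace)
    return namespace["result"]
```

    So the “letter counting” lives entirely in the ordinary Python on the caller’s side; the model’s contribution is just producing that string.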

    But, by rearranging what already exists and claiming it solved the fundamental issues, OpenAI can claim exponential progress, terrify investors into blowing more money into the ecosystem, and make true believers lose their minds.

    Expect more of this around GPT-5, which they promise “is so scary they can’t release it until after the elections.” My guess? It’s nothing different, but they have to create a story so that true believers will see it as something different.