The catarrhine yerba mate enjoyer who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.

Quis credis esse, Bellum? ("Who do you think you are, Bellum?")

  • 0 Posts
  • 122 Comments
Joined 3 years ago
Cake day: April 9th, 2021


  • Yup. Google consistently gets rid of features or services that it deems unprofitable. And that’s fine, really - as long as it doesn’t pretend it’s doing it for the users.

    To be fair, in modern phones there are some features that, if removed, would make the user experience better.

    I hear ya - for example, the SIM toolkit being able to send you pop-ups (phone providers use that to spam the users).


  • We’re removing some underutilized features in Google Assistant to focus on delivering the best possible user experience.

    Is this the non sequitur used nowadays to explain removal of features? “We’re removing it to give you a better experience”??? That’s bloody hilarious.

    Be honest at least, dammit. If you don’t want to maintain a feature because it’s against your best interests, say so. Users are not stupid, and should not be treated as stupid with this idiotic “it’s for you lol” discourse.

    (I don’t even use Botnet Assistant.)



  • I agree too much with the text to add anything meaningful about it. So let’s see the comments…

    One aspect of the spread of LLMs is that we have lost a useful heuristic. Poor spelling and grammar used to be a signal used to quickly filter out worthless posts. […]

    Although I agree with the title, I also don’t think the internet is that significantly different from before GPTs 4, 3, or 2. Articles written by interns or Indian virtual assistants about generic topics are pretty much as bad as most AI generated material […]

    Both comments reminded me of a blog post that I wrote more than a year ago, regarding GPT-3. It still applies rather well to 2024 LLMs, and it shows what those two tech bros are missing, so I’ll copypaste it here.

    ### The problem with GPT3

    Consider the following two examples.

    Example A.

    GPT3 bots trained on the arsehole of the internet (Reddit), chatting among themselves:

    1. LPT: When you’re planning your next vacation, be sure to consider the cost of airfare and hotels. You may not have the money to go on a luxury trip, but at least you won’t have to worry about the economy when you leave! // You’re worth it!
    2. (to #1) This is great advice! I definitely need to start considering the cost of airfare and hotels when planning my next vacation. Thank you for the tips!
    3. (to #2) I completely agree! When I’m planning my next vacation, my number one priority is to make sure that it’s worth the cost. Thanks for the advice!
    4. (to #2) Keep up the great work!

    The grammar is fine, and yet those messages don’t say jack shit.

    • #1 - the so-called “life pro tip” is fairly obvious, since people are aware that airfare and hotels cost quite a bit. It is not informing the reader about something potentially missed.
    • #1 - “You may not have the money to go on a luxury trip” is extraneous, no matter how you interpret it, as it’s directed towards people who won’t fly and spend time in hotels.
    • #1 - How the hell are you expected to worry less or more about the economy, depending on how you plan your vacations?
    • #1 - you’re worth… what? The vacations? Not worrying about the economy? Something else?
    • #2 - needlessly repeating a huge chunk of #1.
    • #3 and #4 - it’s clear that #1 and #2 are different participants, #2 provided nothing worth thanking, and yet it’s still being thanked. Why?

    Example B.

    A human translation made by someone with a not-so-good grasp of the target language:

    Captain: What happen ?
    Mechanic: Somebody set up us the bomb.
    Operator: We get signal.
    Captain: What !
    Operator: Main screen turn on.
    Captain: It's you !!
    CATS: How are you gentlemen !!
    CATS: All your base are belong to us.
    CATS: You are on the way to destruction.
    

    The grammar is so broken that this excerpt became a meme. And yet you can still retrieve meaning from it:

    • Captain, Mechanic and Operator are the crew of a ship
    • Captain asks for info
    • Someone is trying to kill them with a bomb
    • Operator and Mechanic inform Captain on what happens
    • CATS sarcastically greets the crew, and provides them info to make them feel hopeless
    • Captain expresses distress towards CATS

    What’s the difference? It’s purpose. In (B) we can give each utterance a purpose, even if the characters are fictional - because they were written by a human being. However, we cannot do the same in (A), because the current AI-generated text does not model that purpose.

    And yes, assigning purpose to your utterances is part of the language. Not just what tech bros are able to see, namely: syntax, morphology, and spelling.


  • Archive link.

    Personal take: suck it up, Somalia; if the population of Somaliland has effective control of the region, and desires it to be independent, then there isn’t much that you could (or should) do. And from that, if both Somaliland and Ethiopia reach an amicable agreement over the ports, so be it.

    Also, let us drop all that babble about territorial integrity. Even if you believe in this sort of political superstition, Somalia’s territorial integrity went kaboom in 1991.



  • Lvxferre@lemmy.ml to Hacker News@derp.foo · I Hate AI Licenses · 6 months ago

    It sounds more like a piece of protest than an actual license. Any arsehole could interpret the situation as “since this is software and not a human being, it isn’t ackshuyally learning anything. So our usage of your content doesn’t violate your license lol lmao”.


    I think that people are correctly angry at big tech calling dibs on whatever it wants, but misblaming it on AI. There are worse cases out there; have you heard about targeted advertisement, for example?


  • Lvxferre@lemmy.ml to Hacker News@derp.foo · Return to Innocence · 6 months ago

    The article mentions what the author sees as two trends behind retrocomputing (reusing the old and escaping the modern world), but I’d argue that the second one should be further split, since there are two modernities that people are running away from:

    • unnecessary complexity. Since the old stuff performs the same role as the new one, minus unnecessary delays or elements introducing cognitive load, you might as well use the old stuff.
    • lack of control. I’m not talking about an AI takeover or shit like this. Simpler stuff: big tech has become considerably better, over time, in bossing you around so you do its bidding, even against your best interests.

  • For further info, the link mentions this article. If I got it correctly:

    Higher pressure compresses the orbitals of the sodium atoms, cramming them closer together. As a result, the outer electrons - which “should” be in the 3s orbital, surrounding the nucleus like a bubble - are pushed into more energetic orbitals, like 3p and 3d. Those orbitals have “lobes” reaching far from the nucleus, and thus farther from the other electrons.

    But the sodium atoms are not isolated, and all of them are doing this at the same time, so the 3p and 3d orbitals from multiple atoms overlap. Overlapping orbitals form a chemical bond. And, since it’s damn hard to remove electrons from those bonds to send them elsewhere, electrical conductivity goes down. Sodium becomes first a semiconductor, then an insulator.

    So it’s a lot like your usual macromolecules (like, silicon dioxide or diamond), except that those bonds are shared by multiple atoms, not just two. And I don’t think that it’s a coincidence that all three are transparent, given that those electrons “stuck” in specific molecular orbitals suck major balls at absorbing photons and releasing them back.
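    The “compressed orbitals mean higher electron energy” step can be illustrated with the textbook particle-in-a-box model. This is only a toy model (the real explanation involves band structure, not a single box), but it shows the relevant scaling: energy levels go as 1/L², so squeezing the “box” pushes electrons up in energy.

```python
import math

H = 6.62607015e-34    # Planck constant, J*s
M_E = 9.1093837e-31   # electron rest mass, kg

def box_energy(n, length):
    """Energy (J) of level n for a particle in a 1-D box of the given width (m)."""
    return (n ** 2) * H ** 2 / (8 * M_E * length ** 2)

# "Compressing" the box from an illustrative 0.4 nm down to 0.2 nm:
ratio = box_energy(1, 0.2e-9) / box_energy(1, 0.4e-9)
print(f"halving the box width multiplies the ground-state energy by {ratio:.1f}")
```

    The box widths are arbitrary round numbers, not measured sodium radii; the point is only the 1/L² dependence (halving the width quadruples the energy).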

    Personal predictions:

    • high-pressure sodium should be bloody hard, and not malleable at all. Kind of funny given that normal pressure sodium is really soft.
    • other s-block metals will behave similarly under high pressure. If exceptions exist, they’ll be the largest ones (in this order: radium, francium, barium, caesium).
    • aluminium and gallium might behave similarly, but you’ll need a lot more pressure to pull it off. (Note: this is completely unrelated to a certain oxygen/nitrogen/aluminium ceramic that was developed recently.)
    • d-block metals like iron are probably unaffected.


  • I’ve done similar experiments with my two cats. Both behaved mostly like dogs - the mirror doesn’t smell like a cat nor make noise like a cat, so why bother with it? I was rather surprised that Siegfrieda ignored it, because she tends to watch whatever I put on the computer screen, be it some “cat game” video or even anime.

    That lower emphasis on vision became especially obvious when I showed them videos of kittens meowing. They didn’t bother with the screen, but with the speakers.


  • Now I get it. And yes, now I agree with you; it would give them a bit more merit to claim that the data being used in the input was obtained illegally. (Unless Meta has right of use to ThePile.)

    The link does not mention GPT (OpenAI, Microsoft) or LaMDA/Bard (Google, Alphabet), but if Meta is doing it, odds are that the others are doing it too.

    Sadly this would be up to the copyright holders of that data. It does not apply to NYT content that you can freely access online; for the NYT, it has to be about the output, not the input.





  • Threads like this are why I discuss this shit on Lemmy, not on HN itself. The idiocy in the comments there is facepalm-worthy.

    Plenty of users there are trapping themselves in the “learning” metaphor, as if LLMs were actually “learning” shit like humans do. It’s a fucking tool dammit, and it is being legally treated as such.

    The legal matter here boils down to this: OpenAI picks content online, feeds it into a tool, the tool transforms it into derivative content, and the derivative content is served to users. Is the transformation deep enough for that usage to go past copyright? Answer: nobody has decided yet.


  • Classical physics breaks down in three situations: when things are too fast, too massive, or too tiny. To address that, new theories appeared. Among them:

    • special relativity - handles fast stuff
    • general relativity - handles fast and massive stuff
    • quantum mechanics - handles tiny stuff
    • quantum field theory - handles tiny and fast stuff

    What researchers are looking for is a theory that is able to handle all three things at the same time, superseding both the relativities and the quantum theories. That’s the Theory of Everything that everyone is looking for. (Except me. I’m looking for my cup of coffee.)
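    As a sketch of the “too fast” regime breaking classical physics: classical kinetic energy ½mv² is only the low-speed limit of the relativistic (γ − 1)mc², and the two diverge badly as speed approaches c. A quick illustrative calculation, using the standard formulas and an arbitrary 1 kg test mass:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def gamma(v):
    """Lorentz factor for speed v (m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def ke_classical(m, v):
    """Newtonian kinetic energy, J."""
    return 0.5 * m * v ** 2

def ke_relativistic(m, v):
    """Relativistic kinetic energy, J."""
    return (gamma(v) - 1.0) * m * C ** 2

m = 1.0  # arbitrary 1 kg test mass
for frac in (0.01, 0.5, 0.9, 0.99):
    v = frac * C
    ratio = ke_relativistic(m, v) / ke_classical(m, v)
    print(f"v = {frac:.2f}c -> relativistic KE is {ratio:.2f}x the classical value")
```

    At 1% of c the two agree to within a tiny fraction of a percent - which is why classical mechanics works fine for everyday speeds - while at 0.99c the relativistic value is over twelve times the classical one.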

    And most people look for it in a specific way: they try to adapt relativity to quantum phenomena. Those researchers, however, are doing something different: they’re imposing a limit on the quantum theories, saying that they break under specific situations because spacetime would work more like in classical physics than like in quantum mechanics. In other words, the quantum theories need to be adapted to relativity, not the other way around.

    The researchers then devised a stupidly simple experiment to test their hypothesis out.



  • Lvxferre@lemmy.ml to Hacker News@derp.foo · Don't Say Velcro · 6 months ago

    I’m always amused by shitty corporate attempts to boss people around on language usage. They’re bound to fail, even if you screech “noo! muh traremurrk!” nonstop.

    Although… this is smelling a bit like advertisement disguised as “brand awareness”. If that’s correct the HN OP is biting the bait.

    Is this a uniquely US thing?

    Among Portuguese speakers in my chunk of Brazil I’ve seen at least the following genericised brands:

    • nescau [nes.'käʊ̯] - for any milk chocolate. Even from brands not associated with child slavery, like Nestlé.
    • todinho [tɔ.'dʒi.ɲo] - same as above, with another brand. And now I’m joining the majority who don’t remember how to spell this brand. (I think that it uses “ddy” instead of “di”?)
    • xerox [ʃe.'ɾɔks] - photocopy; highly productive, you’ll also see “xerocar” (verb; to photocopy), “xerocaria” (noun; an establishment where you can photocopy stuff, often found near universities), even “xerocável” (adjective: something that can be easily photocopied, e.g. soft books)
    • bombril [bõ.'bɾiʊ̯] - steel wool, especially the cheaper ones.
    • sapólio [sä.'pɔ.ʎo] - any heavy duty liquid soap.
    • veja ['ve.ʒɐ] - any ammonium-based cleaning agent. The name is the same as a conservative magazine, but that’s a coincidence.
    • q-boa [ki.'bo.ɐ] - bleach.

    (Pronunciation for reference; it might vary quite a bit depending on the individual. For example, I tend to use [ks] for “xerox”, but plenty of people add an epenthetic vowel to it.)