• 0 Posts
  • 29 Comments
Joined 2 years ago
Cake day: September 27th, 2023

  • Different person here.

    For me the big disqualifying factor is that LLMs don’t have any mutable state.

    We humans have a part of our brain that can change our state from one to another as a reaction to input (through hormones, memories, etc.). Some of those state changes are reversible, others aren’t. Some can be done consciously, some can be influenced consciously, and some are entirely subconscious. This is also true for most animals we have observed: we can change their states through various means. In my opinion, this is a prerequisite for feeling anything.

    Once we use models with bits dedicated to such functionality, it’ll become a lot harder for me personally to argue against them having “feelings”, especially because in my worldview, continuity is not a prerequisite, and instead mostly an illusion.


  • His Hyprland setup looks cool if you’re into that sort of thing, but it’s just not what users switching to Mint, Fedora, or whatever might be looking for.

    I would not underestimate how much of a draw “it looks cool” can have on people who are not tech savvy at all. Think about what drives new phone purchases: major version upgrades always include lots of things that are nothing but eye candy, and those are often featured heavily in the promotional material.

    If the goal is to get casual users to convert to Linux, I would argue that aesthetics is a lot more important than ANY talk about technical details, privacy, etc. If those users cared about those things, they would’ve switched already.

    Now my bigger worry is that those users will bounce off before they manage to get their setup to look as (subjectively) cool as his.






  • I wanna add to what other users already answered: this problem is not created by federation, only exacerbated by it.

    If I’m a mod of a community and I ban your Lost_My_Mind@lemmy.world account, I cannot stop you from creating, e.g., Lost_My_M1nd@lemmy.world and coming back. Most servers have some barriers against spam account creation in place, but I’d wager you could easily create a handful of accounts on a server before those barriers kick in.

    Even completely centralized platforms such as Twitter and Reddit have the same problem: you can easily evade a ban or block a couple of times within any given timeframe.




  • Mirodir@discuss.tchncs.de to Programmer Humor@programming.dev · Sus · edited · 8 months ago

    Sure. You have to solve it from inside out:

    • not() — see the comment below for this one, I was tricked. (My original explanation: not is a base function that negates what’s inside, turning True into False and vice versa; giving it no parameter returns True, because no parameter counts as False.)
    • str(x) turns x into a string, in this case it turns the boolean True into the text string ‘True’
    • min(x) returns the minimal element of an iterable. In this case that’s the character ‘T’, because capital letters come before lowercase letters; otherwise it would return ‘e’. (Python compares characters by their Unicode code points; capitals have lower values than lowercase letters, and within each case the order is alphabetical ascending.)
    • ord(x) returns the Unicode code point of x, in this case turning ‘T’ into the integer 84
    • range(x) creates an iterable from 0 to x (non-inclusive), in this case you can think of it as the list [0, 1, 2, …82, 83] (it’s technically an object of type range but details…)
    • sum(x) sums up all elements of an iterable; summing all numbers from 0 to 83 gives 3486
    • chr(x) is the inverse of ord(x) and returns the character at position x, which, you guessed it, is ‘ඞ’ at position 3486.

    The huge coincidental part is that ඞ lies at a position that can be reached by a cumulative sum of integers between 0 and a given integer. From there on it’s only a question of finding a way to feed that integer into chr(sum(range(x)))
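    Putting the steps together (the full one-liner here is my reconstruction from the walkthrough above, so treat it as a sketch), you can check it in a Python REPL like this:

        # evaluating the expression inside out
        print(not ())          # True: the empty tuple () is falsy, so "not ()" is True
        print(str(not ()))     # 'True'
        print(min('True'))     # 'T': capitals sort before lowercase by code point
        print(ord('T'))        # 84
        print(sum(range(84)))  # 3486, i.e. 0 + 1 + ... + 83
        print(chr(3486))       # 'ඞ'
        # and the whole thing in one go:
        print(chr(sum(range(ord(min(str(not ())))))))  # 'ඞ'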





  • Reading this comment gave me the idea to get Gemini’s “opinion”. Ironically, it turns out Google might’ve actually done better if they had done that.

    AI conversation

    Prompt:

    I design ads for Google for a living and was charged with writing an ad for you (Gemini) to be aired during ad breaks at the Olympics.
    My current idea is as follows:
    A father wants to help his daughter write a letter to her idol, Olympic track star Sydney McLaughlin-Levrone. In a voiceover, the dad says his daughter “might even be the world’s No. 1 Sydney fan.” In the ad he will ask you, Gemini, to “help my daughter write a letter telling Sydney how inspiring she is and be sure to mention that my daughter plans on breaking her world record one day. (She says sorry, not sorry.)”
    What do you think of this idea?

    Gemini output (emphasis mine):

    Evaluation of the Ad Idea

    The concept of using Gemini to help a young fan write a letter to her idol is heartwarming and relatable. It aligns well with the aspirational and inspiring tone of the Olympics.
    However, there are a few points to consider:

    Potential Issues

    • Overemphasis on competition: The line “She says sorry, not sorry” might come across as overly competitive, which could potentially alienate viewers.
    • Dependence on AI: While showcasing Gemini’s capabilities, it’s important to maintain a balance. Over-reliance on AI to write the letter might diminish the authenticity and emotional impact of the story.
    • Target audience: Consider the target audience for the Olympics. While the story is heartwarming, it might resonate more with a family audience than with core sports fans.

    Suggestions

    • Focus on inspiration: Instead of emphasizing competition, focus on the inspiring qualities of Sydney McLaughlin-Levrone and how she motivates the young girl.
    • Highlight human connection: Emphasize the bond between the father and daughter and how they are using Gemini as a tool to connect with their idol.
    • Visual storytelling: Use strong visuals to capture the emotion and excitement of the story. Show the girl’s admiration for Sydney, the father’s support, and the magic of Gemini helping them craft the perfect letter.

    Overall, the idea has potential, but refining the focus and execution can elevate the ad’s impact.
    Would you like to explore some alternative approaches or focus on developing the concept further?


  • I didn’t say that.

    I expect it to be about as awful as Starfield. However, unlike Starfield (which didn’t sell horrendously by any source I can find, just not great), it has incredible brand recognition behind it. I have no doubt it will sell based on that alone, as long as it looks like Skyrim 2 at first glance.

    Edit: right after posting I figured out how to formulate what else I wanted to say but couldn’t find the correct words for: “Sadly profitability and quality don’t always correlate.”






  • It’s not as accurate as you’d like it to be. Some issues are:

    • It’s quite lossy.
    • It’ll do better on images containing common objects vs rare or even novel objects.
    • You won’t know how much the result deviates from the original if all you’re given is the prompt/conditioning vector and what model to use it on.
    • You cannot easily “compress” new images; instead, you would have to either finetune the model (at which point you’d also mess with everyone else’s decompression) or run an adversarial attack on the model with another model to find the prompt/conditioning vector most likely to produce something as close as possible to the original image you have.
    • It’s rather slow.

    Also, it’s not all that novel. People have been doing this with (variational) autoencoders (another class of generative model). That approach also doesn’t have the flaw of being unable to easily compress new images, since an autoencoder is a trained encoder/decoder pair. It’s also quite a bit faster than diffusion models when it comes to decoding, but often with a greater decrease in quality.

    Most widespread diffusion models even use an autoencoder-adjacent architecture to “compress” the input. The actual diffusion model then works in that “compressed data space”, called latent space. The generated images are then decompressed before being shown to users. Last time I checked, iirc, that compression rate was around 1/4 to 1/8, but it’s been a while, so don’t quote me on this number.
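    For illustration, here is a minimal, untrained PyTorch sketch of such an encoder/decoder pair; the class name and layer sizes are made up for this example and are not taken from any real diffusion pipeline:

        # Hypothetical toy autoencoder: "compress" an image into a small latent tensor
        # and "decompress" it back (lossily). Layer sizes are illustrative only.
        import torch
        import torch.nn as nn

        class TinyAutoencoder(nn.Module):
            def __init__(self):
                super().__init__()
                # Encoder: 3x256x256 image -> 4x64x64 latent (the "compressed" representation)
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 256 -> 128
                    nn.ReLU(),
                    nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 128 -> 64
                    nn.ReLU(),
                    nn.Conv2d(64, 4, kernel_size=3, stride=1, padding=1),   # 4-channel latent
                )
                # Decoder: latent -> reconstructed image (what users would actually see)
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(4, 64, kernel_size=4, stride=2, padding=1),   # 64 -> 128
                    nn.ReLU(),
                    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 128 -> 256
                    nn.ReLU(),
                    nn.Conv2d(32, 3, kernel_size=3, stride=1, padding=1),
                    nn.Sigmoid(),
                )

            def forward(self, x):
                latent = self.encoder(x)             # "compress"
                return self.decoder(latent), latent  # "decompress", plus the latent for inspection

        model = TinyAutoencoder()
        image = torch.rand(1, 3, 256, 256)           # stand-in for a real image
        reconstruction, latent = model(image)
        print(image.numel(), latent.numel())         # 196608 vs 16384: far fewer numbers than raw pixels

    A latent-diffusion setup works in that smaller latent space and relies on a trained decoder like this to turn latents back into pixels; an untrained toy like the one above only shows the shape of the idea, not useful reconstructions.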

    edit: fixed some ambiguous wordings.