Currently studying CS and some other stuff. Best known for previously being top 50 (OCE) in LoL, expert RoN modder, and creator of RoN:EE’s community patch (CBP).

(header photo by Brian Maffitt)

  • 26 Posts
  • 83 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • A more onion-y title would be something like “Conservative commentator quotes Marx, calls for mass protests and strikes”.

    The actual title is more just !ironicorsurprisingnews than !nottheonion material imo


    Edit: You’ve editorialized the title?

    Posts must be:

    1. Links to news stories from…
    2. …credible sources, with…
    3. …their original headlines, that…
    4. …would make people who see the headline think, “That has got to be a story from The Onion, America’s Finest News Source.”

    Unless it was changed post-publication, the original headline is:

    Conservative NYT Columnist David Brooks Calls for ‘National Civic Uprising’ to Defeat Trumpism – Complete With ‘Mass Rallies, Strikes’

    Imo that’s actually more onion-y than the changed title




  • You’re making assumptions about how they work based on your intuition - luckily we don’t need to do much guesswork about how the sorts are actually implemented because we can just look at the code to check:

    CREATE FUNCTION r.scaled_rank (score numeric, published timestamp with time zone, interactions_month numeric)
        RETURNS double precision
        LANGUAGE sql
        IMMUTABLE PARALLEL SAFE
        -- Add 2 to avoid divide by zero errors
        -- Default for score = 1, active users = 1, and now, is (0.1728 / log(2 + 1)) = 0.3621
        -- There may need to be a scale factor multiplied to interactions_month, to make
        -- the log curve less pronounced. This can be tuned in the future.
        RETURN (
            r.hot_rank (score, published) / log(2 + interactions_month)
    );
    

    And since it relies on the hot_rank function:

    CREATE FUNCTION r.hot_rank (score numeric, published timestamp with time zone)
        RETURNS double precision
        LANGUAGE sql
        IMMUTABLE PARALLEL SAFE
        RETURN
            -- after a week, it will default to 0.
            CASE WHEN (now() - published) > '0 days'
                AND (now() - published) < '7 days' THEN
                -- Use greatest(2,score), so that the hot_rank will be positive and not ignored.
                log (greatest (2, score + 2)) / power (((EXTRACT(EPOCH FROM (now() - published)) / 3600) + 2), 1.8)
            ELSE
                -- if the post is from the future, set hot score to 0. otherwise you can game the post to
                -- always be on top even with only 1 vote by setting it to the future
                0.0
            END;
    

    So if there are no further changes made elsewhere in the code (which may not be true!), it appears that hot has no negative weighting for scores below 0, because it uses the greater of 2 and score + 2 in its calculation. If that’s correct, the posts you’re pointing out are essentially being ranked as if their voting score were 0, which I hope helps to explain things.


    Edit: while I was looking for the function, someone else beat me to it, and it looks like the hot_rank function I posted may not be the current version, but hopefully you get the idea regardless!
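
    To make the clamping concrete, here’s a small Python sketch of the two functions above. This is a hand translation (assuming the posted SQL is current), with `hours_since_published` standing in for `now() - published`:

    ```python
    import math

    def hot_rank(score: int, hours_since_published: float) -> float:
        # Posts from the future, or older than a week, rank as 0.
        if not (0 < hours_since_published < 7 * 24):
            return 0.0
        # greatest(2, score + 2) floors the log argument at 2, so any
        # score <= 0 is ranked exactly like a score of 0.
        return math.log(max(2, score + 2)) / (hours_since_published + 2) ** 1.8

    def scaled_rank(score: int, hours_since_published: float,
                    interactions_month: float) -> float:
        # Dividing by log(2 + monthly interactions) boosts quieter communities.
        return hot_rank(score, hours_since_published) / math.log(2 + interactions_month)

    # A 1-hour-old post at score -5 ranks identically to one at score 0:
    print(hot_rank(-5, 1) == hot_rank(0, 1))  # True
    ```

    Note how a downvoted post still gets a positive rank, it just stops falling once the score drops below zero.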



  • Additionally, it’s helpful to know the specific language used in Article 5:

    Article 5

    “The Parties agree that an armed attack against one or more of them in Europe or North America shall be considered an attack against them all and consequently they agree that, if such an armed attack occurs, each of them, in exercise of the right of individual or collective self-defence recognized by Article 51 of the Charter of the United Nations, will assist the Party or Parties so attacked by taking forthwith, individually and in concert with the other Parties, such action as it deems necessary, including the use of armed force, to restore and maintain the security of the North Atlantic area.

    Any such armed attack and all measures taken as a result thereof shall immediately be reported to the Security Council. Such measures shall be terminated when the Security Council has taken the measures necessary to restore and maintain international peace and security.” (emphasis added)

    Article 5 doesn’t actually oblige NATO members to defend anything by force; it obliges each member to decide what action it “deems necessary” and then to undertake it. If a NATO member gets invaded, everyone could – in theory – write a sternly worded letter and call it a day (though I doubt that would be the actual response). As you/others have more or less said, the actual action chosen would largely be the result of political will.






  • So they literally agree not using an LLM would increase your framerate.

    Well, yes, but the point is that at the time that you’re using the tool you don’t need your frame rate maxed out anyway (the alternative would probably be alt-tabbing, where again you wouldn’t need your frame rate maxed out), so that downside seems kind of moot.

    Also what would the machine know that the Internet couldn’t answer as or more quickly while using fewer resources anyway?

    If you include the user’s time as a resource, it sounds like it could potentially do a pretty good job of explaining, surfacing, and modifying game and system settings, particularly to less technical users.

    For how well it works in practice, we’ll have to test it ourselves / wait for independent reviews.


  • It sounds like it only needs to consume resources (at least significant resources, I guess) when answering a query, which will already be happening when you’re in a relatively “idle” situation in the game since you’ll have to stop to provide the query anyway. It’s also a Llama-based SLM (S = “small”), not an LLM for whatever that’s worth:

    Under the hood, G-Assist now uses a Llama-based Instruct model with 8 billion parameters, packing language understanding into a tiny fraction of the size of today’s large scale AI models. This allows G-Assist to run locally on GeForce RTX hardware. And with the rapid pace of SLM research, these compact models are becoming more capable and efficient every few months.

    When G-Assist is prompted for help by pressing Alt+G — say, to optimize graphics settings or check GPU temperatures— your GeForce RTX GPU briefly allocates a portion of its horsepower to AI inference. If you’re simultaneously gaming or running another GPU-heavy application, a short dip in render rate or inference completion speed may occur during those few seconds. Once G-Assist finishes its task, the GPU returns to delivering full performance to the game or app. (emphasis added)