I find it quite common (and confusing) for certain news types like policy, eg “party A reverses the disapproval to oppose the once-unacceptable ban”
I’m also curious. A quick search came up with these. Not sure which one is most reliable/updated
Many things are called “AI models” nowadays (unfortunately due to the hype). I wouldn’t dismiss the tools and methodology yet.
That said, the article (or the researchers) did the analysis a disservice by not including a link to a report (and code) that outlines the methodology and how the distribution of similarities looks. I couldn’t find a link in the article, and a quick search didn’t turn up anything.
You should try asking the same question with xAI / Grok if possible, and maybe ask ChatGPT about Altman as well.
I believe experiments like these should move slower and with more scrutiny. As in more animal testing before moving on to humans, esp. due to the controversies surrounding Neuralink’s last animal experiments.
re: your last point, AFAIK, the TLDR bot is also not AI or LLM; it uses more classical NLP methods for summarization.
Is there a database tracking companies that start out with good intentions and then eventually get bought out or sell out their initial values? I’m wondering what the deciding factors are, and how long it takes for them to turn.
Reminds me of this article https://www.alexmurrell.co.uk/articles/the-age-of-average where the author pulls in different examples of designs and aesthetics converging to some “average”.
I’m feeling conflicted about these trends: on one hand, things seem to be becoming more accessible; on the other, it feels like a loss.
This may be especially relevant with generative AI - at least for the little generative art I look at, at some point the pieces start to feel the same, impersonal.
They don’t seem to allow account deletions. Does that mean this could include accounts they still keep even though people no longer use their services?
This suggests that either these people are detached from reality, or they are pitching this to a very specific set of people under the guise of a general appeal.
The whole premise of the OP is that this monitors people, and many organizations use TOTP, which AFAIK one can also use without an internet connection or a phone.
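To illustrate why TOTP works offline: the whole algorithm (RFC 6238) needs only a shared secret and a clock, nothing else. A minimal sketch of the standard SHA-1, 30-second-step variant, using just the Python standard library:

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """TOTP (RFC 6238): HOTP over a counter derived from the current time.

    No network access is involved anywhere - only `secret` and a timestamp.
    """
    counter = for_time // step
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226, section 5.3)
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

With the RFC 6238 test secret `b"12345678901234567890"` and timestamp 59, this yields the spec’s published 8-digit code `94287082` - the same code any offline authenticator app would show.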
I’m in academia and I wish this were implemented more. Data breaches are getting quite common, and GitHub is so entwined in software engineering that increasing security measures is critical.
Or maybe keep most of them in a folder, with one file that defines their locations as environment variables?
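A minimal sketch of that layout, assuming one folder plus a single module that publishes file locations (all names here are hypothetical, not from the thread):

```python
import os

# Hypothetical layout: the individual files live under one folder, and this
# one module is the single place that records where each of them is.
CONFIG_DIR = os.path.join(os.path.expanduser("~"), ".config", "myapp")

LOCATIONS = {
    "MYAPP_CONFIG_DIR": CONFIG_DIR,
    "MYAPP_TOKENS_FILE": os.path.join(CONFIG_DIR, "tokens.env"),
    "MYAPP_KEYS_FILE": os.path.join(CONFIG_DIR, "keys.pem"),
}

def export_locations(env=os.environ) -> None:
    # Publish each location as an environment variable so other tools can
    # find the files by name alone, without hard-coding paths.
    for name, path in LOCATIONS.items():
        env[name] = path
```

The point of the single module is that moving the folder later means changing one line, not hunting through every script.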
Something like this may still be exploitable unless they know the root cause (I didn’t read the paper, so I’m not sure if they do) or something close to it.
yeah agreed with your sentiment. I think it’s good to have an intuition about something, but it’s much better when there’s data to back it up.
Cuz then they can do the same with others, say YouTube or other streaming services, and start to compare the numbers: % of ads, what types of ads, how long the ads are relative to content, how many of these ads are political, how many of these ads may be harmful, …
Having these numbers can be quite handy for other researchers and regulators to look into these issues more concretely, rather than just saying, “as your brothers and sisters already know, tiktok serves ads”.
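A toy sketch of the kind of comparison I mean - the service names and numbers below are made up purely for illustration, not real measurements:

```python
# Made-up per-session logs of ad time vs. content time, per service.
sessions = [
    {"service": "tiktok",  "ad_seconds": 45, "content_seconds": 600},
    {"service": "tiktok",  "ad_seconds": 60, "content_seconds": 540},
    {"service": "youtube", "ad_seconds": 30, "content_seconds": 900},
]

def ad_share_by_service(sessions):
    """Share of total watch time that was ads, per service."""
    totals = {}
    for s in sessions:
        ads, content = totals.get(s["service"], (0, 0))
        totals[s["service"]] = (
            ads + s["ad_seconds"],
            content + s["content_seconds"],
        )
    return {
        svc: ads / (ads + content)
        for svc, (ads, content) in totals.items()
    }
```

Once the numbers are in this shape, the same pipeline could break them down further by ad type (political, potentially harmful, etc.) and put services side by side.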
maybe even integration with uBlock if possible?
lol what’s the context here?
If you’ve never worked before, this can be considered a practice run for when you do.
Like one of the other commenters said, assume everything is accessible by Google and/or your university (and later, your boss, company, organization, …).
And not just you, but also the people who interact with you through it. That means you may be able to put up defenses, but if they don’t (and they most likely do not), the data you exchange with them would likely be accessible as well.
So here are some potential suggestions to minimize private-data access by Google/university while still being able to work with others (adjust things depending on your threat model of course):