thanks for clarifying! that’s really helpful!
haha nice. I’ll try that next time
gotcha, thanks for clarifying :)
“NOPE” as in “not a dark pattern” or as in “I’m not touching this site”? If the former, can you clarify the reason?
can you clarify the 7?
thanks for confirming my suspicion. As for your question, conda in general is good for installing non-Python binaries when needed and for managing environments. I don’t use Anaconda myself, but it provides a good enough interface for beginners and folks without much coding experience. For them it’s usually easier to use than other variants, or than the pure-Python route of setting up environments.
If you’ve never worked before, this can be considered a practice run for when you do.
Like one of the other commenters said, assume everything is accessible by Google and/or your university (and later, your boss, company, organization, …).
And not just your data, but also that of the people who interact with you through it. You may be able to put up defenses, but if they don’t (and they most likely do not), the data you exchange with them would likely be accessible as well.
So here are some potential suggestions to minimize private-data access by Google/university while still being able to work with others (adjust things depending on your threat model of course):
I find it quite common (and confusing) for certain types of news, like policy coverage, e.g. “party A reverses the disapproval to oppose the once-unacceptable ban”
I’m also curious. A quick search turned up these. Not sure which one is the most reliable/up to date
Many things are called “AI models” nowadays (unfortunately due to the hype). I wouldn’t dismiss the tools and methodology yet.
That said, the article (or the researchers) did a disservice to the analysis by not linking to the report (and code) that outlines the methodology and how the distribution of similarities looks. I couldn’t find a link in the article, and a quick search didn’t turn up anything.
you should try asking the same question via xAI / Grok if possible. You may also want to ask ChatGPT about Altman as well
I believe experiments like these should move slower and with more scrutiny — i.e., more animal testing before moving on to humans, esp. given the controversies surrounding Neuralink’s previous animal experiments.
re: your last point, AFAIK the TLDR bot is also not an LLM; it uses more classical NLP methods for summarization.
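For anyone curious what “classical” means here: a common pre-LLM technique is extractive summarization, where you score each sentence (e.g. by the frequency of its words in the document) and keep the top few. A toy sketch of that idea in Python — my own illustration, not the TLDR bot’s actual code:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "of", "to", "and", "in", "it", "that"}

def summarize(text: str, n: int = 2) -> str:
    """Naive extractive summary: score sentences by average word
    frequency and keep the top n, in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)

    def score(sentence: str) -> float:
        toks = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[w] for w in toks) / (len(toks) or 1)

    top = sorted(sorted(sentences, key=score, reverse=True)[:n],
                 key=sentences.index)
    return " ".join(top)
```

Real systems (TextRank, LexRank, etc.) use graph-based sentence ranking instead of raw frequency, but the extract-and-rank structure is the same.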
Is there a database tracking companies that start out with good intentions and then eventually get bought out or sell out their initial values? I’m wondering what the deciding factors are, and how long it takes for them to turn.
Reminds me of this article https://www.alexmurrell.co.uk/articles/the-age-of-average where the author pulls in different examples of designs and aesthetics converging to some “average”.
I’m feeling conflicted about these trends: on one hand, it seems like things are becoming more accessible; on the other, it feels like a loss.
This may be especially relevant with generative AI — at least for the little generative art I look at, at some point it starts to feel the same, impersonal.
They don’t seem to allow account deletion. Does that mean this could include accounts they still keep even though people no longer use their services?
suggests either that these people are completely detached from reality, or that they are pitching this to a very specific set of people under the guise of general appeal
the whole premise of OP is that this monitors people, yet many organizations use TOTP, which one can use without an internet connection or a phone, AFAIK.
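To back that up: TOTP (RFC 6238) is just an HMAC over a shared secret and the current 30-second time step, so generating codes needs no network at all. A minimal hand-rolled sketch in Python, for illustration only — real deployments should use a vetted library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6, t=None) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    # Decode the base32 shared secret, re-adding any stripped padding.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # Number of whole periods since the Unix epoch.
    counter = int((time.time() if t is None else t) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an HMAC-derived offset.
    offset = digest[-1] & 0x0F
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF)
    return str(code % 10 ** digits).zfill(digits)
```

Everything here is computable offline from the secret and a clock, which is why hardware tokens and airplane-mode phones still produce valid codes.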
I’m in academia and I wish this were implemented more widely. Data breaches are getting quite common, and GitHub is so entwined with software engineering that increasing security measures is critical.
or maybe keep most of them in a folder, plus one file that defines their locations as environment variables?
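One way to sketch that “one file defines their locations” idea is a dotenv-style loader — the `load_env` helper and the `MYAPP_SECRETS_DIR` variable name below are hypothetical, just to illustrate the pattern:

```python
import os

def load_env(path: str) -> None:
    """Load KEY=VALUE lines from a dotenv-style file into os.environ.
    Blank lines and '#' comments are skipped; existing variables win."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault so values already in the environment take priority.
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

The rest of your tooling then only reads `os.environ`, so moving the secrets folder means editing one file.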
I’ve never had an account with these. Do I need to create an account with them to freeze my credit? And what kinds of information should I give / not give when I do?