I did nothing and I’m all out of ideas!

  • 0 Posts
  • 8 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • I actually feel this is going to make it harder for people to find out about the change; having something suddenly disappear from your feed is less visible than seeing reminders when you click on a new post.

    At the same time, considering the .org one is new, not a lot of servers/instances will have the community federated, so it will not appear even in the All feed for people, especially on the small instances.

    For those reasons I would advise a transition period, if possible, though I can see how it could be annoying to manage.




  • I feel there’s some kind of miscommunication going on here.

    I’m probably not understanding what you are putting forward, but to be clear: they are not doing this because they want to. They are doing it because the DMA forces them to.
    It’s true that they were allegedly already working on some kind of interoperability layer. For years now. But no evidence of it being more than lip service to avoid regulation has ever surfaced - as far as I know.

    Which would have been in line with your “Do Nothing”.


  • “as an unwilling WhatsApp user the ability to migrate without having to convince all my social circles to do anything but check a checkbox sounds like a huge step forward.”

    That’s the point: I feel it will not be a “simple checkbox”, and they will make it the most obnoxious process they can, using the Best Dark Patterns the industry has to offer.

    The general public is already not interested in the alternatives or in the concept of interoperability - they want something that Just Works™ - so putting even the smallest extra step (and some scary text!) in front of them will make the percentage of willing people even lower.
    And that’s not all. As portrayed in the article by Threema’s spokesperson, it is pretty clear that Meta will just try to make the maintenance of the communication layer as cumbersome as they can - both technically and bureaucratically.
    They are explicitly the ones keeping the reins of the standard, the features, the security model, the exchanged data, and who gets approved, how, and when.

    So on one side, if they make it hard and scary enough to tank the usage rate, they will have the excuse that there aren’t enough people using it to prioritize fixes or new features. On the other side, if maintaining the interoperability is difficult and time-consuming enough, the people and businesses behind the alternatives or wrappers will have no incentive to keep doing it for the long haul - as we can already see in the article.

    Is it better than nothing? Sure, probably. Will it be slow-moving, easy to break, easy to get excluded from, the bare minimum to comply with the letter but not the spirit of the law? I feel that’s a pretty good bet to make.

    Let’s be clear: I will be extremely happy if all the red flags and warning bells I saw in the article end up being figments of my imagination. But yes, I’m very pessimistic - maybe even too pessimistic - when I see this kind of corporate speech and these keywords.


  • “One of the core requirements here, and this is really important, is for users for this to be opt-in,” says Brouwer. “I can choose whether or not I want to participate in being open to exchanging messages with third parties. This is important, because it could be a big source of spam and scams.”

    Let me translate this for you: “We will make users jump through the most cumbersome, frustrating, and inefficient process we can think of to enable interoperability. And defaulting it to off means that people using other apps will need to find other channels to ask our users to enable it on their end, making it worthless.

    And don’t forget: we will add a bunch of scary warnings, and only allow going all in, with no middle ground or granularity!”

    Great stuff, thank you. I can’t wait.

    “We don’t believe interop chats and WhatsApp chats can evolve at the same pace,” he says, claiming it is “harder to evolve an open network” compared to a closed one.

    Ah, so they are going for Apple’s approach with iMessage and Android SMS. Cool, cool.

    I hope my corporate-to-common translator is broken, because this just sounds bad.


  • I like the idea of 24 years, plus the possibility of an extension (with a fee, or proof of activity) for another 12 years if the owner is a person.

    If, at any point, the rights are sold or passed on, or the owner is not a person (but, for example, a corporation or association), it should last 12 years from that time with a possible extension of 6, so the ability to sell your idea or give it to your spawn for a soft landing is not destroyed, but it cannot be abused.

    To further avoid abuse, if it has already been extended before being sold, it should last only 6 years, with no possibility of extension. So at most it would be 42 years (24 + 12 + 6) for active and valuable things.

    EDIT: To clarify, if it gets sold multiple times, the “timer” shouldn’t reset but keep ticking down from the date of the first transaction; otherwise it would open the door to accounting shenanigans just to keep it alive and locked. (A toy encoding of these rules is sketched below.)
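
    Purely as an illustration, here is a minimal sketch of those rules in Python; the function names and the exact encoding are my own reading of the comment above, not anything from a real proposal.

    ```python
    # Toy model of the proposed scheme. All names and the exact rule
    # encoding are my own reading of the comment above, not any real law.

    def initial_term(owner_is_person: bool) -> int:
        """Term granted at creation: 24y plus an optional 12y extension
        for a person; 12y plus an optional 6y for a corporation/association."""
        return (24 + 12) if owner_is_person else (12 + 6)

    def term_after_first_sale(already_extended: bool) -> int:
        """Years remaining from the date of the FIRST transfer; later
        sales don't reset this timer (see the EDIT above)."""
        # If it was already extended before being sold, only 6 years
        # remain and no further extension is possible.
        return 6 if already_extended else (12 + 6)

    # Worst case for an active, valuable work: the full 24 + 12 under
    # personal ownership, then sold at the very end: 6 more years, 42 total.
    assert initial_term(owner_is_person=True) \
        + term_after_first_sale(already_extended=True) == 42
    ```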


  • Any foundation model is trained on a subset of Common Crawl.

    All the data in there is, arguably, copyrighted by one individual or another. There is no equivalent dataset to it, open source or closed.

    Every single post, page, blog, and site has a copyright holder. In the last year, big companies have started changing their TOS so that they can use, relicense, and generally sell the data you host on their services for the purpose of AI training, so potentially some small parts of Common Crawl will become licensable in bulk - or directly obtainable from the source.

    This still leaves out the majority of the data directly or indirectly used today, even if you were willing to pay, because it is infeasible to track down and contract every single rights holder.

    On the other side, there has been work on using less, but more heavily curated, data, which could potentially produce good small, domain-specific models. Still, they will not be like the ones we currently have, and the open source community will not have access to the same amount and quality of data. (A sketch of that kind of filtering follows below.)
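
    To make that “curated subset” idea concrete, here is a minimal sketch of license-aware filtering in Python; the record layout and the license tags are hypothetical assumptions, since Common Crawl does not ship clean per-document license metadata like this.

    ```python
    # Sketch of license-aware filtering over a crawl-derived corpus.
    # The record layout and license tags are hypothetical assumptions.

    PERMISSIVE = {"cc0", "cc-by", "public-domain"}  # hypothetical tag set

    def curated_subset(records):
        """Keep only documents with a permissive (hypothetical) license
        tag that also pass a trivial quality bar - "less but curated"."""
        for doc in records:
            if doc.get("license") in PERMISSIVE and len(doc.get("text", "")) > 200:
                yield doc

    corpus = [
        {"url": "https://example.org/a", "license": "cc-by", "text": "x" * 300},
        {"url": "https://example.org/b", "license": "all-rights-reserved", "text": "x" * 300},
    ]
    print([d["url"] for d in curated_subset(corpus)])  # -> only example.org/a
    ```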

    It’s an interesting problem, and I’m personally really curious to see where it leads.