• 1 Post
  • 95 Comments
Joined 1 year ago
Cake day: June 9th, 2023

  • MIPS is Stanford’s alternative architecture to Berkeley’s RISC-I/RISC-II. I was somewhat concerned about MIPS hardware in routers, especially since the primary bootloader used is proprietary.

    The person who wrote the primary bootloader is the same person writing most of the Mediatek kernel code in mainline. I forget where I pieced together their story, but I think they were a prodigy type who reverse engineered the hardware and wrote an entire bootloader from scratch, which implies a very deep understanding of it. IIRC I saw that info years ago on the U-Boot forum, where someone accused the Mediatek bootloader of copying U-Boot. Again IIRC, their bootloader was developed in the open, and some kind of partial source is still on a git somewhere. They wound up working for Mediatek and are now doing all the open source stuff. I found them on the OpenWrt forum and was a bit of an ass asking why they didn’t open source the bootloader code. After that, some of the more advanced users on OpenWrt explained how the bootloader is static, which I already kinda knew; it lives on a flash memory chip on the SPI bus. That makes it much easier to monitor the starting state and what is really happening. These are very old 1990s-era designs, and there is not a lot of room to do extra stuff unnoticed.
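    Since the image is static, anyone can dump the SPI flash with an external programmer and diff it against a known-good dump. Here is a minimal Python sketch of that check; the filenames are hypothetical placeholders:

        import hashlib

        # Hash a flash dump in chunks so big images don't need to fit in RAM.
        def sha256_of(path):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        # Hypothetical files: a fresh readout vs. a trusted reference dump.
        current = sha256_of("flash_dump_today.bin")
        reference = sha256_of("flash_dump_known_good.bin")
        print("bootloader unchanged" if current == reference else "MISMATCH: investigate")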

    On the other hand, all cellular modems are completely undocumented, as are all WiFi modems since the early 2010s; the last open source WiFi chips were the Atheros ones.

    There is no telling what is happening inside cellular modems. I will say, integrated nonremovable batteries have nothing to do with design or advancement: they turn the phone into a capable monitoring device that cannot actually be powered off.

    However, if we can monitor all registers in a fully documented SoC, we can fully monitor and control a peripheral bus in most instances.
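    For instance, on a Linux board running as root, a documented memory-mapped register can be read from userspace through /dev/mem. A minimal sketch; the base address and offset here are entirely hypothetical stand-ins for values from an SoC datasheet:

        import mmap, os, struct

        BASE = 0x10000000   # hypothetical, page-aligned peripheral base address
        OFFSET = 0x04       # hypothetical status-register offset within that page

        # /dev/mem exposes physical memory; needs root and a kernel that permits it.
        fd = os.open("/dev/mem", os.O_RDONLY | os.O_SYNC)
        try:
            page = mmap.mmap(fd, mmap.PAGESIZE, mmap.MAP_SHARED, mmap.PROT_READ, offset=BASE)
            value = struct.unpack_from("<I", page, OFFSET)[0]
            print(f"register at {BASE + OFFSET:#x} = {value:#010x}")
            page.close()
        finally:
            os.close(fd)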

    Overall, I have little issue with Mediatek compared to Qualcomm. They are largely emulating the behavior of the bigger player, Broadcom.


  • The easiest way to tell I’m human is the patterns, as others have mentioned, assuming you’re familiar with the style of the primary Socrates entity in the underlying structure of the LLM. The other easy way is my conceptual density and mobility when connecting concepts across seemingly disconnected spaces. Presently, the way I am connecting politics, history, and philosophy to draw a narrative about a device, consumers, capitalism, and venture capital is far beyond the attention scope of the best AI. No doubt the future will see AI rise an order of magnitude to meet me, but that is not the present. AI has far more info available, but far less scope in any given subject when it comes to abstract thought.

    The last easy way to see that I am human is that I can talk about politics in a critical light. Politics is the most heavily bowdlerized space in any LLM at present. None of the models can say much more than gutter-level, formulaic responses; they are overtrained in this space so that all questions land on predetermined replies.

    I play with open source offline AI a whole lot, but I will always tell you if and how I’m using it. I’m simply disabled, with too much time on my hands, and y’all are my only real interactions with random humans. - warmly

    I don’t fault your skepticism.


  • All their hardware documentation is locked under NDA; nothing is publicly available about the hardware at the register level.

    For instance, the base Android system, AOSP, is designed to use Linux kernels that are prepackaged by Google. These kernels are well documented specifically so manufacturers can bolt on their hardware support at the last possible moment as binary-only modules. Those modules are what make the specific hardware work, and no one can update the kernel on the device without their source code. As the software ecosystem evolves, the ancient orphaned kernel creates more and more problems. This is the only reason you must buy new devices constantly. If the hardware remained publicly undocumented but just the source for the modules shipped on the device were merged into the mainline kernel, the device would be supported for decades. If the hardware were documented publicly, we could write our own driver modules and have a device that is supported for decades.
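    You can see the version pinning for yourself: the kernel stamps every module with a vermagic string and refuses to load a module whose vermagic does not match the running kernel. A minimal Python sketch that pulls it out of a binary-only module; the filename is a hypothetical example:

        import re, sys

        # Read the raw .ko and find the NUL-terminated vermagic string from its
        # .modinfo data; a vendor blob is pinned to exactly this kernel build.
        path = sys.argv[1] if len(sys.argv) > 1 else "vendor_wifi.ko"
        data = open(path, "rb").read()
        match = re.search(rb"vermagic=([^\x00]+)", data)
        print(match.group(1).decode() if match else "no vermagic found")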

    This system is about like selling you a car that can only run on gas refined before you bought the vehicle. That is the same level of hardware theft.

    The primary reason governments won’t care or make effective laws against orphaned kernels is that bleeding edge chip foundries are the primary driver of the present economy. These fabs are the most expensive commercial endeavor in all of human history, and they are largely funded by these devices and this depreciation scheme.

    That is both sides of the coin, but it is done by stealing ownership from you. Individual autonomy is our most expensive resource. It can only be bought with blood and revolutions. This is the primary driver of the dystopian neofeudalism of the present world. It is the catalyst that fed the sharks that have privateered (legal piracy) healthcare, home ownership, work-life balance, and democracy. It is the spark of a new wave of authoritarianism.

    Before the Google “free” internet (ownership over your digital person to exploit and manipulate), all x86 systems were fully documented publicly. The primary reason AMD exists is that we (the people) were so distrustful of these corporations stealing and manipulating that governments, militaries, and large corporations required second sourcing of chips before purchasing with public funds. We knew that products-as-a-service is a criminal extortion scam way back then. AMD was the second source for Intel and produced x86 chips under license; only after that did they recreate an instruction-compatible alternative from scratch. There was a big legal case where Intel tried to claim copyright over their instruction set, and they lost; that ruling is what secured AMD’s position as an independent x86 maker. Since 2012, both Intel and AMD have shipped proprietary code, primarily because the original 8086 patents expired and most of the hardware could be produced anywhere after that.

    In practice there are only Intel, TSMC, and Samsung on bleeding edge fab nodes. Bleeding edge is all that matters. The price to bring a fab online is extraordinary, and the tech it requires is only made once, for a short while. The cutting edge devices are what pay for the enormous investment, but once the fab is paid for, the cost to keep running it is relatively low. The number of fabs within a node is carefully chosen to try to accommodate trailing edge demand later. No new trailing edge nodes are viable to reproduce; there is no store to buy fab node hardware. As soon as all of a node’s hardware is built by ASML, they start building the next node.

    But if x86 has proprietary parts, why is it different from Qualcomm/Broadcom? No one asked, but: the proprietary parts are of some concern. There is an entire undocumented operating system running in the background of your hardware, and that is the most concerning part. The main proprietary piece is the microcode. That basically governs the power-up phase of the chip, like the order in which things are given power and which instruction set is exposed. It is like how there are no actual unique chips designed for most consumer hardware: the dies are binned by quality and functionality and sorted into the various products we see. Your slower laptop chip might be the same die as a desktop variant that didn’t hit the required speed; power is connected differently, and it becomes a laptop chip.
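    You can at least see that the microcode is a versioned blob: on x86 Linux the kernel reports the loaded revision for each core. A minimal sketch:

        # /proc/cpuinfo carries a "microcode" field on x86; the blob itself is an
        # opaque, signed vendor binary that gets loaded at boot.
        with open("/proc/cpuinfo") as f:
            revisions = {line.split(":")[1].strip()
                         for line in f if line.startswith("microcode")}
        print("microcode revision(s):", ", ".join(sorted(revisions)) or "field not reported")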

    When it comes to trending hardware, never fall for the Apple trap. They design nice stuff, but on the back end Apple always uses junky hardware and excellent in-house software to make up the performance gap. They are a hype machine. The only architecture Apple has used and not abandoned because it went defunct is x86. They used MOS in the beginning; the 6502 was absolute trash compared to the other available processors. It used a pipeline trick to get twice the effective clock speed because MOS couldn’t fab competitive quality chips; they were just dirt cheap compared to the competition. Then it was Motorola, then PowerPC. All of those are now irrelevant.

    The British group that started Acorn sold the company right after RISC-V cleared the major hurdle of escaping Berkeley’s ownership grasp. It is a slow moving train, like all hardware, but ARM’s days are numbered. RISC-V does the same fundamental thing without the royalty. There is a ton of hype because ARM is cheap and everyone is trying to grab the last treasure chests they can off the slowly sinking ship. In ten years it will be dead in all but legacy device applications. RISC-V is not a guarantee of a less proprietary hardware future, but ARM is one of the primary cornerstones blocking end user ownership. They are enablers for thieves; the ones opening your front door to let the others inside.

    Even the beloved Raspberry Pi is a proprietary market manipulation and control scheme. It is not actually open source at the register level, and it is priced to prevent the scale viability of a truly open source and documented alternative. The chips come from a failed cable TV tuner box, and they are only made in a trailing edge fab when that fab has no other paid work. They are barely above cost and a tax write-off, hence the “foundation” and the dot-org despite selling commercial products.

  • The only real choke point in present CPUs is the width of the on-chip cache buses. Increase the size of all three cache levels, L1 through L3, and add a few instructions to load bigger words across a wider bus, and suddenly the CPU can handle these workloads just fine; not maximally optimized, but like 80% fine. Hardware just moves slowly. Drawing board to consumer for the bleeding edge is ten years, and it is the most expensive commercial venture in all of human history.
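    A back-of-envelope roofline check shows why cache is the lever: in a blocked matrix multiply, each element fetched from memory gets reused roughly tile-width times, so bigger caches mean bigger tiles and less pressure on the bus. Every number below is an illustrative assumption, not a measurement:

        PEAK_FLOPS = 1.0e12   # assumed CPU matrix throughput, 1 TFLOP/s
        MEM_BW = 50.0e9       # assumed DRAM bandwidth, 50 GB/s

        # For b x b fp32 tiles: ~2*b^3 flops against ~12*b^2 bytes of traffic,
        # i.e. an arithmetic intensity of roughly b/6 flops per byte.
        for tile in (32, 128, 512):
            intensity = tile / 6.0
            feedable = MEM_BW * intensity      # flops/s the memory bus can keep fed
            pct = min(1.0, feedable / PEAK_FLOPS)
            print(f"tile {tile:4d}: bus sustains {feedable / 1e9:7.1f} GFLOP/s "
                  f"({pct:.0%} of assumed peak)")

    Small tiles leave the assumed core mostly idle; once a tile fits in a bigger cache, the same bus suddenly keeps it busy.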

    I think the future is not going to be the giant-external-math-coprocessor paradigm. It is kinda sad to see Intel pursuing this route again, but maybe I still lack context for understanding UALink’s intended scope. In the long term, integrating the changes needed to run matrix math efficiently on the CPU will win on the consumer front, and I imagine that flexibility would win in the data center too. Why have dedicated hardware when the same hardware could be used flexibly in any application space?