Stay away from ARM laptops and SoCs; they aren't there yet when it comes to Linux. If you like to tinker, go for it, but expect hardware to just not work or, worse, to get stuck on a kernel fork that never gets updated.
If you want a good Linux machine, buy one from a vendor that explicitly sells and supports machines with Linux on them.
IMO you can tinker as much as you want without forcing hardware compatibility issues upon yourself in order to have something to tinker with.
The ThinkPad X13s is more or less there. I've been using it as my primary machine (and laptop) for the last month, and it 'just works'. All-day battery life, fanless so it's dead silent, and a crisp screen with decent DPI. KDE and Vivaldi run as fast as on my i7-13700 desktop.
That seems to be the conclusion I have been trying to avoid reaching. With Graviton and other ARM-based Linux servers making up a good bulk of my work, I had hoped I wouldn't have to worry about multi-architecture Docker builds. Ah well.
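For context, the chore I'd rather not think about is roughly this (a sketch; assumes Docker with the buildx plugin and QEMU binfmt emulation set up, and "myregistry/myapp" is just a placeholder name):

```
# One-time: create a builder that can target multiple platforms
docker buildx create --name multiarch --use

# Build for both x86_64 and ARM (e.g. Graviton) and push to a registry;
# multi-platform images can't be loaded straight into the local daemon.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myregistry/myapp:latest \
  --push .
```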
Any suggestions for something well built but lightweight, on which one could figure out how to get 8+ hours of actual daily-use battery life?
I've had a great experience with my Framework 13 (AMD), although I usually get 4-5 hours of battery life, so not quite the full 8 hours you're looking for.
I have tried multiple Framework devices, as a partner firm uses them. Good devices, and I want to support what they're doing, but yeah, battery life is really lacking for my use case.
Others have mentioned ThinkPads, and in my experience the better ones all get 8h+; just stay away from the X1 Carbon (my current work machine) with hybrid Nvidia graphics. Those have problems with not turning off the discrete GPU and sucking the battery empty, but judging from lots of forum posts that isn't just a Linux problem.
It has been a while since I daily drove one, but my old laptop had an Nvidia hybrid setup and it was possible to get power management working decently with it, though that might have been me getting lucky with the configuration. Thanks for the heads-up.
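For what it's worth, the lucky configuration was (I think) mostly enabling the driver's runtime power management so the discrete GPU could actually power down when idle. A rough sketch, assuming the proprietary nvidia driver on a GPU recent enough to support runtime D3 (the PCI address below is only an example):

```
# /etc/modprobe.d/nvidia-pm.conf
# Let the proprietary driver use fine-grained runtime power management
# so the discrete GPU can be powered off while idle.
options nvidia NVreg_DynamicPowerManagement=0x02

# After a reboot, check whether the dGPU actually suspends when idle;
# this should read "suspended" rather than "active".
cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status
```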
There's an angle where criminal intent doesn't matter when it comes to negligence and damages. They had to have known that their scrapers would cause denial of service, unauthorized access, increased costs for operators, etc.
That's not a certain outcome. If you're willing to take this case, I can provide access logs and any evidence you want. You can keep any money you win, plus I'll pay a bonus on top! Wanna do it?
Keep in mind I'm in Germany, the server is in another EU country, and the worst scrapers are overseas (in China, the USA, and Singapore). Thanks to these LLMs there is no barrier to having the relevant laws translated in all directions, so I trust that won't be a problem! :P
> criminal intent doesn't matter when it comes to negligence and damages
Are you a criminal defense attorney or prosecutor?
> They had to have known
IMO good luck convincing a judge of that... especially "beyond a reasonable doubt" as would be required for criminal negligence. They could argue lots of other scrapers operate just fine without causing problems, and that they tested theirs on other sites without issue.
LLMs quite literally work at the level of their source material; that's how training works, that's how RAG works, etc.
There is no proof that LLMs work at the level of "ideas"; if you could prove that, you'd solve a whole lot of incredibly expensive problems that are current bottlenecks for training and inference.
It is a bit ironic that you'd call someone wanting to control and be paid for the thing they themselves created "selfish", while at the same time writing apologia on why it's okay for a trillion dollar private company to steal someone else's work for their own profit.
It isn't some moral imperative that OpenAI gets access to all of humanity's creations so they can turn a profit.
While it's definitely possible to train a model for that, 'very easy' is nonsense.
Unless you've got some superintelligence hidden somewhere, you'd choose a neural net. To train one, you need a large supply of LABELED data. Seems like a challenge to build that dataset; after all, we have no scalable method for that kind of classification yet.
There isn't a technical solution to this: governments and providers not only want proof of identity matching IDs, they want proof of life, too.
This will always end with, at the very least, live video of the person requesting to log in to provide proof of life, and if they're lazy or want more data, they'll tie their ID verification process into their video pipeline.
That's not the kind of proof of life the government and companies want online. They want to make sure their video identification 1) is of a living person right now, and 2) that living person matches their government ID.
It's a solution to the "grandma died but we've been collecting her Social Security benefits anyway" problem, or "my son stole my wallet with my ID & credit card", or (god forbid) "we incapacitated/killed this person to access their bank account using facial ID".
It's also a solution to the problem advertisers, investors and platforms face of 1) wanting huge piles of video training data for free and 2) determining that a user truly is a monetizable human being and not a freeloader bot using stolen/sold credentials.
> That's not the kind of proof of life the government and companies want online.
Well that's your assumption about governments, but it doesn't have to be true. There are governments that don't try to exploit their people. The question is whether such governments can have technical solutions to achieve that or not (I'm genuinely interested in understanding whether or not it's technically feasible).
It's the kind of proof my government already asks of me to sign documents much, much more important than watching adult content, such as social security benefits.
Might be that those who are permanently out of the workforce because they couldn't find work aren't counted in unemployment statistics, along with flight and deaths of despair.
AI output isn't copyrightable in the US.