I'm not sure I'd call this risk. Risk would be "you can invest the money, but you might not get it back", whereas the above is referring to the "a 51% attack absolutely works, but you need a shit ton of money to do it" aspect instead. That makes it capital intensive, not (necessarily) risky.
The fact that it succeeds does not mean that you get the money back (e.g. the price of Monero could drop if that happens). You may also have miscalculated some parameters, or something unexpected could happen (there's a human factor involved). So there should always be risk involved, imo. Otherwise I agree: even in a probability-1 success situation, this still couldn't be called "cheap".
It is absolutely risky. Your facilities can burn down after the ASICs arrive but before they are turned on, or your employees can simply steal them for their own use. Heck, you can have a fire once they're powered on because a power cable was poorly made. You might be sent the wrong product, or you could be ghosted without a delivery.
Expensive is a better fit than capital intensive, because there are massive ongoing costs to actually perform the attack: electricity, for one.
If you want to understand the risks for a project, pretend you are at arm's length and are being asked to fund the project 100% up-front. You'll find a huge list of risks very soon.
This is why I didn't say it made the investment risk free; I said being capital intensive does not make something (inherently) risky. There is no such thing as an investment without risk, but how risky an investment is is largely orthogonal to how capital intensive it is. The above was talking about the latter, so using the term "risk" for that half is not a great correction.
Having the power to deny others the ability to mine blocks does not mean that you can obtain the tokens from their wallets. Miners can't sign transactions on users' behalf. You can rewrite all of history, but then no exchange will accept your version of it to let you exchange the tokens for fiat. This will also almost certainly crash the price of XMR substantially. And later, people will be able to fork/restore the original version. The technological side of the blockchain is only part of the consensus/trust/market/popularity; people are the other part, and people will not pay the attacker for their successful attack.
The attacker doesn't need to steal tokens. They just need to short the token while they sufficiently disrupt the network to drive down the price. They get the money and your tokens become worthless.
I was completely wrong about the cost. XMR mining rewards amount to only $150k/day.
At the height of the attack, Qubic (the company) paid people up to $3 in QUBIC for every $1 of XMR they mined through QUBIC, and they achieved around 33% of XMR's hashrate which was sufficient to mine the majority of blocks for a few hours.
If they were forced to buy back all those QUBICs they paid out, this might have cost them ~$100k/day. But thanks to the media attention it's likely that they didn't need to buy anything back and actually were able to emit more than they otherwise could have.
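A back-of-envelope check of that ~$100k/day figure, using only the numbers cited in this thread ($150k/day total rewards, ~33% hashrate share, $3 of QUBIC per $1 of XMR); these are the commenters' estimates, not measured values:

```python
# Rough worst-case daily cost to Qubic, assuming full buybacks.
daily_xmr_rewards = 150_000  # total XMR block rewards, $/day (from this thread)
qubic_share = 0.33           # fraction of hashrate mined through Qubic
payout_ratio = 3.0           # $ of QUBIC paid per $1 of XMR mined

xmr_mined = daily_xmr_rewards * qubic_share   # ~$49.5k of XMR/day via Qubic
qubic_paid = xmr_mined * payout_ratio         # ~$148.5k of QUBIC/day paid out
net_cost = qubic_paid - xmr_mined             # ~$99k/day if buybacks were needed
print(round(net_cost))  # 99000
```

So the ~$100k/day estimate is consistent with the other figures in the thread, and it's an upper bound: any QUBIC they never buy back reduces the cost.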
XMR needs to adapt -- switch to PoS, ASIC-based PoW, or a hybrid of both.
Controlling 51% of XMR costs ~$30M per day; you'd have to short a huge amount of XMR to make that worthwhile. Who would be the counterparty, and how would you do that anonymously?
The attack itself is unprofitable, the "profit" for Qubic is the publicity they get. (or at least that's what they're betting on)
Monero has a theoretical market cap of $4.7B USD and daily volumes >$100M USD. I wouldn't recommend taking that short position in one go, but spread over a few days and a few exchanges, I wouldn't see a problem acquiring a very large short on the token.
Ask yourself why so many content hosting platforms use Cloudflare's services, and then contrast that perspective with the one you posted. It might enlighten you a bit to think about that for a second.
Between this and Hertz's new AI damage detection models, we're seeing the enshittification of business travel reach a new level, while doing a great job of really ticking off a group of customers (business travelers) who are already irritated enough.
Rest markets itself as a way to "unlock a new revenue stream"
Leave it to the bean counters to see this as an opportunity to generate new revenue streams from customers while simultaneously pissing them off.
There have always been attempts to launder fraud through intermediaries, whether computerized, bureaucratic, or otherwise. They think (well, know) that if they abstract or obfuscate things in a novel way, they'll have enough time to hit markets across states before the legal immune systems can respond with sophisticated legislation, potentially years later.
This type of algorithmic grift is transparent to judges and people with common sense, but there doesn't seem to be much interest at or outside the federal level, through regulators like the FTC, in preventing it outright, only in curtailing it in certain circumstances.
That sounds like the stuff of science fiction; I can't believe it works. The best part is that it works long distance without having to have satellites in the sky... and is probably un-jammable?
Thanks for sharing this, so cool to learn about it!
The pay scale isn't really equivalent. For military doctors and dentists, the typical lure is that they will pay off all your student loans in exchange for a specific time commitment to the military.
R&S equipment is pretty common in the cellular industry. There is more of it out there than you would think. While this particular signal generator is maybe not the most common piece of equipment you will see lots of R&S spectrum analyzers and the like.
It's technically not a signal generator but an arbitrary waveform generator (AWG). It provides a baseband modulated signal to the signal generator, which the sig gen upconverts to the RF carrier. If you purchased a vector signal generator this would be built in, but you can still buy them standalone, and they are pretty common. NI, R&S, Tek, and Keysight all have product lines of them.
Another way to think about it, this would be comparable to a signal generator in the same way that an oscilloscope is comparable to a spectrum analyzer.
What you call a signal generator is what most would call an RF signal generator. Both an AWG and an RF signal generator belong to the generic signal generator family. :-)
But LLMs are that stupid. Do you remember that guy who vibe coded a cheating tool for interviews and literally leaked all his API keys/secrets to GitHub, because neither he nor the LLM knew better?
Is that the same guy who had his degree revoked for creating a cheating tool for interviews and is now a millionaire for creating a cheating tool for interviews?
Could be. Somewhere else in these comments someone was saying they found evidence that the app was coded that way.
But they also said it was a project by two students. And I could absolutely see students (or even regular developers) who aren't used to thinking about security making that mistake. It is a very obvious way to implement it.
In retrospect I know that my senior project had some giant security issues. There were more things to look out for than I knew about at that time.
One reason I could think of is that they may return the database (or cache, or something else) response after generating and storing the OTP. Quick POCs/MVPs often use their storage models for API responses to save time, and then it is an easy oversight...
Save an HTTP request, and faster UX! What's not to love?
When Pinterest's new API was released, they were spewing out everything about a user to any app using their OAuth integration, including their 2FA secrets. We reported it and got a bounty, but this sort of shit winds up in the APIs of big companies that really should know better.
It appears that the OTP is sent from "the response from triggering the one-time password".
I suspect it's a framework thing; they're probably directly serializing an object that's put in the database (ORM or other storage system) to what's returned via HTTP.
Oversight. Frameworks tend to make it easy to make an API endpoint by casting your model to JSON or something, but it's easy to forget you need to make specific fields hidden.
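A minimal sketch of the leak described above. The names here (`OtpRecord`, `request_otp_*`) are illustrative, not from the actual app; the point is that serializing the stored model wholesale ships the secret to the client, while whitelisting response fields does not:

```python
from dataclasses import dataclass, asdict
import secrets

@dataclass
class OtpRecord:
    phone: str
    otp_code: str  # the secret: must never leave the server

def _new_record(phone: str) -> OtpRecord:
    # Generate a 6-digit code; a real app would also store it and send the SMS here.
    return OtpRecord(phone=phone, otp_code=f"{secrets.randbelow(10**6):06d}")

def request_otp_leaky(phone: str) -> dict:
    record = _new_record(phone)
    return asdict(record)  # BUG: dumps every field, otp_code included

def request_otp_fixed(phone: str) -> dict:
    record = _new_record(phone)
    # Whitelist response fields instead of serializing the model directly.
    return {"phone": record.phone, "status": "sent"}
```

Most ORM/serializer stacks have an equivalent of that whitelist (hidden or excluded fields); the oversight is forgetting to use it on the one model that carries a secret.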
I assume that whoever wrote it simply has no mental model of security, has never been on the attacking side or realised that clients can't be trusted, and only implemented the OTP authentication because they were "going through the motions" they'd seen other people implement.
Everyone that programs should take blackhat classes of some kind. I talk to so many programmers that really don't understand what hackers/attackers can actually do.
My best guess would be some form of testing before they added the "send a message" part to the API. Build the OTP logic, the scaffolding... and add a way to make sure it returns what you expect. But yes, absolutely wild.
There is a word for this. We call it risk.