In a multisig interaction there are 3 ways to get hacked:
- The multisig smart contract is owned
- The computer you're signing on is owned
- The hardware wallet (ledger, trezor) you're using is owned
The multisig contract in question here (Gnosis Safe) has proven incredibly robust, and hardware wallets are very difficult to attack, so the current weak point is the computer.
Cryptocurrency companies need to start solving this by moving to a more locked-down, dedicated machine for signing, as well as actually verifying what is shown on the tiny hardware wallet screen instead of blindly clicking "yes".
I think this shows that the best option for protection is to just send many small transactions, never a big one. Define some maximum tolerance for loss and send that. This is the advantage of XRP: instant and very cheap transactions, so you can just automate many small transfers. If something goes wrong, you can overhaul everything before losing all your money.
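The "many small transactions" idea above can be sketched in a few lines: split a large transfer into chunks no larger than a chosen max-loss tolerance, so a single compromised transaction caps the damage. The function name and amounts are illustrative, not any real exchange API.

```python
def split_transfer(total: int, max_loss: int) -> list[int]:
    """Split `total` into chunks of at most `max_loss` each."""
    if max_loss <= 0:
        raise ValueError("max_loss must be positive")
    full, rest = divmod(total, max_loss)
    chunks = [max_loss] * full
    if rest:
        chunks.append(rest)
    return chunks

# $1.5B in $10M chunks: if one transfer looks wrong, abort the rest.
chunks = split_transfer(1_500_000_000, 10_000_000)
assert sum(chunks) == 1_500_000_000
print(len(chunks))  # 150
```

The trade-off is fees and latency per chunk, which is why the parent comment ties this to a chain with cheap, fast transactions.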
Why should it go online at all? $1.5 billion buys a lot of plane tickets to the same physical place, and how frequently do they need to be accessing the whole lump, anyway?
For that matter, I know signatures are long and human-unfriendly, but isn’t it on the order of a couple hundred bytes? Surely $1.5 billion buys transcribing the putative signature request into an isolated machine in a known state, validating/interpreting/displaying the request’s meaning on that offline machine, performing your signing there offline, copying down the result, and carrying the attestation to your secret conclave lair to combine with the others’ or whatever?
What you should do is sign the transaction on an offline computer (which is booted from a linux OS on a flash drive with only the essential software), simulate the transaction to verify it does what you expect, and then save the signed transaction to a flash drive. Then you can submit your transaction on a connected computer with confidence that you didn't sign your tokens away to someone else.
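The offline-sign/online-broadcast handoff described above can be sketched with the standard library. HMAC stands in for the real ECDSA signature here purely to keep the example self-contained; with real ECDSA, the online machine would verify using only the public key, so the secret never leaves the air-gapped box.

```python
import hashlib
import hmac
import json

SECRET = b"illustrative-key-material"  # stand-in for the cold key

def sign_offline(tx: dict) -> dict:
    """Runs on the air-gapped machine; output goes onto the flash drive."""
    payload = json.dumps(tx, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"tx": tx, "sig": sig}

def verify_before_broadcast(signed: dict) -> bool:
    """Runs before broadcasting: re-derive what was actually signed."""
    payload = json.dumps(signed["tx"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["sig"])

signed = sign_offline({"to": "0xabc...", "value": 50})
assert verify_before_broadcast(signed)
# If malware swaps the recipient after signing, verification fails:
signed["tx"]["to"] = "0xattacker..."
assert not verify_before_broadcast(signed)
```

The key point is that the signature binds the exact transaction contents, so any tampering between the offline and online stages is detectable, provided you actually re-verify on the online side.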
No, the computers were pre-infected. If they used air-gapped systems only to sign, there would be almost no attack vector other than some major zero-click zero-day stuff, and in that case everyone is screwed.
> attackers stole approximately $1.5B from their multisig cold storage wallet. At this time, it appears the attackers compromised multiple signers’ devices, manipulated what signers saw in their wallet interface, and collected the required signatures while the signers believed they were conducting routine transactions.
If hackers can get remote access and 'manipulate what signers saw in their wallet interface' that doesn't sound like cold storage to me.
My understanding of "cold storage" was always that they keys are not accessible to the internet. That could be stored on paper, a flash drive or engraved in metal and put in a safe, or it could be in a regular digital wallet on a device never connected to the internet. If you want to do transactions, put it on an airgapped device, create the transaction, then move the transaction to an internet-connected device to broadcast the transaction.
Cold storage means the coins are stored offline: you sign the transaction on the offline computer and then broadcast it from the online one. If the offline computer has malware, the transaction data can be tampered with at the offline stage. In theory this works if both computers show erroneous data (the offline computer signs the funds over to the wrong recipient while displaying the correct one). This is hard to pull off because both computers need to be infected. The super-paranoid can prevent it by verifying on a third computer, e.g. a VPS, or by sending small amounts.
It is possible to infect the offline computer via a USB drive carrying stealth malware, which then propagates to the offline machine.
It could also be an inside job, with an employee getting a kickback from North Korea; it's not like this hasn't happened in the past. Imagine being a low-paid employee at an exchange and being enticed by an offer of tens of millions from North Korea to pretend to be hacked and infect your own computers with malware they supply. That would be easy for an employee with access to the computers to do, and then pass off as a hack.
There is no concept of "coin storage" in the actual security model of cryptocurrency. The security model of cryptocurrency is about the storage of keys.
"Cold storage" has come to mean that the keys are stored in some offline location. It doesn't necessarily mean that the keys are hard to access or that the money being moved is otherwise hard to get to. That used to be what it meant, but practically, a wallet on a hardware keychain is called "cold" exactly the way a wallet whose keys are split up on slips of paper across five different physical vaults is "cold."
Usually you want to boot from a cryptographically verified medium where a checksum can be verified before you execute the system.
The emphasis is on running the correct software. If you have to input cryptographic data every time you boot that's okay because you're offline and should be in a secure room (no internet connected devices).
But yeah, a malware attack is still possible if you don't have a secure chain, and that's a long one.
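The checksum step above is simple in practice: hash the boot image and compare against a value obtained through a separate trusted channel. A stdlib-only sketch, where the path and expected checksum are placeholders:

```python
import hashlib
import hmac

def sha256_file(path: str, bufsize: int = 1 << 20) -> str:
    """Stream the file so even multi-GB images hash in constant memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def verify_image(path: str, expected_hex: str) -> bool:
    # compare_digest avoids subtle comparison mistakes, even though a
    # public checksum does not strictly need constant-time comparison.
    return hmac.compare_digest(sha256_file(path), expected_hex)
```

The harder problem, as the comment notes, is trusting the machine doing the verifying, which is why the full chain (firmware, bootloader, OS) matters.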
The online security world is so wild. In pretty much any other field of engineering, foreign nation states explicitly targeting the thing you built is just kinda out of scope. There's no skyscraper in existence that is designed to withstand sustained artillery shelling, and your car is not going to withstand a tank shell either. Neither do they have to be designed to that specification. If North Korea killed someone with a missile or even destroyed a minor building or something, there would be public outrage and swift (military) repercussions.
But online, it's the wild wild west. The North Koreans can throw anything they want at your systems and the main response you get is "lol get good noob, should have built more secure systems", despite the opposing side quite literally having hundreds of people specifically trained to take on organisations like yours.
Not saying the Bybit people couldn't have been more careful of whatever, but let's appreciate how wild the online environment actually is sometimes.
Factories are not designed to withstand sustained aerial bombardment because the chance of sustained aerial bombardment is small to non-existent due to effective (geopolitical) mitigations.
But if you are in an active war and being actively bombed, then you absolutely design your factories to be resistant to sustained aerial bombardment. You do not just throw your hands up in the air and say: "Who could have expected this totally routine and expected situation in our operational environment? We cannot be blamed for failing to adequately mitigate known risks while mischaracterizing our mitigations as adequate for risks we know we cannot handle."
If there were effective geopolitical mitigations that made the chances of an attack minimal, then your argument would hold weight. But that is not the case. Failure to accommodate known, standard, commonplace failure modes is incompetence. Deceptively implying you do mitigate risks, while lying or with disregard for the truth, is fraud and malice.
There is also a second problem with your argument: executing these attacks is trivial compared to military operations, easily within the reach of lone individuals, let alone groups, organized crime, or entire governments. They require enormous security improvements to actually stop commonplace and routine attacks. But that is a longer argument I am not going to get into right now, since the qualitative argument I made above applies regardless of the quantitative difficulty.
> But if you are in an active war and being actively bombed, then you absolutely design your factories to be resistant to sustained aerial bombardment.
That's not really a viable strategy. It has been tried a few times - Mittelwerk and Kőbánya spring to mind - but you can't really build a self-contained factory. If your enemy can't bomb the factory, they'll bomb the roads and railways serving your factory, they'll bomb the worker housing, they'll bomb the less-sensitive factories that supply your factory with raw materials and components. You very quickly run into the diseconomies of operating under siege conditions.
At least during WWII, it was generally far more effective to rely on camouflage, secrecy and redundancy. Rather than having a super-fortified factory that shouts "this is vital national infrastructure", spread your capacity out into lots of mundane-looking facilities and plan for a certain level of attrition. Compartmentalise information to prevent your enemy from mapping out your supply chain and identifying bottlenecks. Your overall system can be highly resilient, even if the individual parts of that system are fragile.
All of those are methods to be resistant to aerial bombardment in my book.
If nobody knows where your factory is, it looks like a parking lot from the air, and you have multiple smaller factories instead of one big one to limit the impact of any single hit, then you are resistant to aerial bombardment even if your ceiling isn't any sturdier than a normal factory roof. The same goes if the factory is out in the open but everybody thinks your drone factory produces windshield wipers.
Yes, I am aware. I was using “resistant to sustained aerial bombardment” in the general sense of all classes of mitigations, not just fortification.
But thank you for elaborating when I was too lazy to. It helps further reinforce my point that the key is mitigating the risk however you can, not specific risk mitigations somehow absolving responsibility.
> they'll bomb the worker housing, they'll bomb the less-sensitive factories that supply your factory with raw materials and components. You very quickly run into the diseconomies of operating under siege conditions.
All true, and German WW2 production kept increasing despite the bombing.
Sure, if there was an active war going on. But while NK and the USA are not exactly friendly, they're definitely not at war either. In basically any other field, the question of "what do we do when a nation state deploys hundreds of people, well funded and well trained, specifically to screw us over?" is met with some variant of "that's why we pay taxes, so the army can protect us from that".
A normal bank being robbed for 1.5 billion, ESPECIALLY by a pariah country like North Korea, would absolutely not be met with "oh that was definitely your own fault" as many of the sibling comments seem to imply.
According to that page, the global reaction was to block most ($850M) of the fraudulent payments, recover a third of the remainder, add additional security to the SWIFT network and raise standards for banks, and push for penalties for the criminals who participated. That seems like more than a shrug.
And we are likely to see the same response here. Those coins are easily tracked, so the attacker is going to be lucky to get 25% of the value by selling them to someone prepared to take the risk of laundering them.
A crypto exchange does not have the same political influence as a national bank, which makes the situations very different, aside from all crypto stuff.
Say what you want about CBDCs, but they would fix this specific failure mode of digital assets where an enemy nation-state can steal $1.5 billion worth of the token.
I’d also add the number of cases where people holding Bitcoin are being threatened/tortured into transferring it. One less appreciated benefit of a system with reversible transactions is that it makes it significantly harder to do something like that.
It is not about “active war”. It is about mitigating known, routine risks. You are confusing a description of the problem and a description of the solution.
Routine harmful cyberattacks is a problem. You do not get to abdicate responsibility because it is too hard. If you can not handle the operational environment, then do not operate in it.
Maybe the solution is “go to war due to cyberattacks”, but that is not happening right now so their systems are inadequate for the expected operational environment (i.e. incompetent). And everybody knows this is the operational environment, everybody knows they can not deal with expected problems, and everybody does not adequately inform their customers because it would be detrimental to their bottom line.
As you say, it's weird. There absolutely is an all out war going on online. They attack us and we presumably throw just as much at them.
The chief US adversaries have the advantage of national firewalls, and less of their crucial infrastructure is online, so it is perhaps less effective against them. Or for all I know they are subject to equivalent thefts every day and just keep it out of the news.
No one said "they didn't need to defend", or at least that's not how I read OP. The observation is merely that the situation is so wildly different from the physically local world. It's remarkable.
I suspect the truth lies between the two - we are under constant attack, but we aren’t as a society reacting as if we were.
It’s like a building occasionally gets hit by a shell and we don’t get on a war footing.
The closest analogy I can come up with is England in the 1600s and early 1700s. Fairly regularly ships would be attacked by pirates from North Africa, and sometimes an actual land raid would occur: pirates from North Africa would take slaves from small seaside towns.
It was not till England’s navy grew strong enough that the threat was eliminated. And perhaps that’s the real issue here: we know it’s happening, we cannot turn the Wild West into urban peace, so we just have to keep taking the licks and keep building more secure and stronger systems.
> The closest analogy I can come up with is England in the 1600s and early 1700s.
I like your point, but that is a hell of an analogy. 1600 is when they formed the East India Company, which was basically a state-sponsored bunch of pirates, looting the wider world with its hundreds of thousands of soldiers.
https://en.m.wikipedia.org/wiki/East_India_Company
It is not about war footing it is about mitigating known environmental hazards. This can be done geopolitically, collectively, technologically, etc. but the point is that you need to mitigate or accommodate the known, routine risks.
It is silly to point to situations where the risks were mitigated as evidence that you do not need to mitigate the risks as the person I was responding to did. You can do that to argue that we need to mitigate the risks in a different manner, but not to argue that you can not be blamed for not mitigating the risks.
And for examples from history, we could look to Israel’s anti-rocket defenses as a way of handling occasional shelling; ancient castles and walls for handling stray bandits, mercenaries, and armies; private merchant naval vessels of the 1600s, which routinely carried their own cannons; armored compounds and communities in areas with high crime; armored trains and trucks. This is standard practice. We just figured out more effective and cheaper collective mitigations. But until that happens, you need to handle it yourself or you are incompetent.
What has changed is that there is a digital (as opposed to gold) international form of money whose transactions cannot be reversed or stopped. Bybit and other holders of large crypto are operating with a fundamentally different threat model, where it's worthwhile for an attacker to invest millions of dollars of effort (for the Bybit payout, even tens or hundreds of millions) attacking them. Everyone else just needs to worry about getting ransomed for a much smaller amount.
There's a long BBC podcast on Lazarus that touches on the spending.
The members are state-sponsored and young/bright, top 0.1% academic sorts. At one point the BBC got access to a conversation with one of the hackers, and their only question was "how much do you get paid?" (the context was that the hacker thought they were talking to someone else in the tech space).
Apparently they aren't paid very well at all. Far less than the average Western IT worker. Their lives are not luxurious either. They're in barracks style living quarters with strict schedules and travel. Presumably, the anonymous Lazarus hacker was putting out a probing question because they must have been ruminating about what life on the other side would be like, what they are really worth, etc.
That's part of the power of Lazarus: the ability to dedicate resources far in excess of what most expect, thanks to their indentured-servant hackers. (The opportunity to join is presented as a gift, which to some extent it is, because it comes with the extremely rare opportunity to travel; many of them are in China.)
It is essentially a financial institution handling billions of dollars. It is not the average website of your neighbourhood restaurant that got hacked or a scattershot ransomware attack. I would expect that for that scale nation state actors are not out of scope, even if it is usually about infiltration and IP/secrets theft than outright getting robbed.
Eh, said financial institution chose a field of operation where certain risks are present by design. They make good profits precisely because other institutions judge those risks too high.
It's the mob attacking a casino and making off with chips. That people keep valuing those chips is one of the mysteries of our days.
That’s a really good point. If a nation state bombed a private oil rig with $1.5B in damages all hell would break loose. But if it’s a cyber attack no one cares and we blame the victim.
I think it really boils down to plausible deniability, and the fact that it’s convenient for the governments on the receiving end to ignore the damages done to private citizens when there’s no physical harm and clear responsibility.
No president is going to bomb NK because they attacked a crypto exchange. Maybe they should, but it’s not something the public will support. So it’s easy to say “oh well, we don’t really know for sure who did it” and call it a day. It’s our own fault.
I also agree that private citizens have a responsibility to secure ourselves, but where do you draw the line? If I don’t have an AA gun on my roof, am I responsible for enemy warplanes bombing my business? Isn’t this partially why I pay taxes?
Well, there's a couple of airliner shootdowns that kind of go in this category. MH17, PS752, AHY8243... That's at least $0.5B in damage plus many hundreds of civilian lives.
At a certain scale nation-state-level actors have to be part of your threat model, there's no excuse.
But yeah, it's quite baffling how in a couple years we seemingly went from stealing email addresses to credit cards to straight up billions of dollars.
If we can expect that everything shifts online eventually, where will this end? Clicked on the wrong link? Guess your house is gone... tough luck.
This is why it is dangerous to replace people and laws with code. With laws, you eventually get to talk to a human being who has leeway in interpreting the situation. With code, it just works the way it does, regardless of circumstances.
Cryptocurrencies avoid a central authority, but by doing that, they also avoid any possibility of human discretion, oversight, or recourse. There is no institution to appeal to, no customer service to call, and no regulator to enforce fairness.
It does feel, doesn’t it, that the cryptocurrency crowd seems mainly to comprise the kinds of actors who correctly anticipate that the legitimate banking sector—and most humans, if asked—will say “no” to them…
Which I guess the idealists would say is part of the point: “first they came for the DPRK extortionists, and I said nothing,” etc.
Could conceivably, under different circumstances, say no. And are uncomfortable with that state of affairs.
Or to be snarky. Doesn't it seem that the crowd that gets up in arms about unlawful search and seizure are the sort of actors who correctly anticipate that the legitimate authorities would take issue with their behavior?
What concerns me is the idea that risks like these might leak into the regulated financial sector.
Right now, if I want to avoid my dollars being among those billions stolen, I can (and do) keep them someplace far away from instant digital currency. With firms that, while they could move large sums of their money somewhere else, build in a whole lot of friction in proportion to the amount being moved—by their customers, their staff, and their counterparties. Limited and well-understood modes of potential malfeasance, and strong structural discouragement for each of them.
There is nothing that I need to do that needs to move fast. But I’d hate for the firms servicing my slow, boring needs to be tempted by the new shiny.
> If North Korea killed someone with a missile or even destroyed a minor building or something, there would be public outrage and swift (military) repercussions.
Russia kills people in the West with nerve agents or polonium, cuts electrical and Internet cables, blows up ammunition factories, or puts incendiary devices on cargo airplanes, and there are no repercussions.
This post is light on the details of how the hack occurred. Given it talks about their toolkit, am I right to understand that people were tricked into downloading and running malicious software?
> At this time, it appears the attackers compromised multiple signers’ devices, manipulated what signers saw in their wallet interface, and collected the required signatures while the signers believed they were conducting routine transactions.
Depends how this plays out. If Bybit collapses due to this, yeah, lots of individual investors. Though history shows (MtGox, FTX) that eventually they’d be made at least partially whole.
If Bybit doesn’t collapse (can handle all the on-going withdrawals), then Bybit lost money that they’ll need to recoup through operations.
Currently it’s trending towards the second scenario.
My understanding is that this multisig failed because, as with most security processes, everyone just pressed yes and didn’t communicate, investigate, or ask questions, defeating the purpose of a multisig.
Yea, how is it that multiple people signed a transaction for over a billion dollars of assets without due diligence?
If you did this for non crypto there would be lawyers, bankers, etc involved in the transaction.
Root certificate authorities have already solved this problem with signing ceremonies that take place in person, in an air-gapped vault, on specialized hardware, with multiple parties as witnesses.
They didn't sign a transaction for 1 billion dollars.
They all signed what they thought was a routine transfer, but in reality what they signed gave the hacker full control of the smart contract (the Gnosis Safe) in which the $1.4B of tokens were stored.
The hackers, having gained control of the smart contract, proceeded to empty it of funds.
Separate keys (ie wallets) for routine small transactions versus the cold wallets used for huge sums. Perhaps I've misunderstood but it sounded like they performed a rare transaction while being led to believe it was a routine one. I'm wondering why you wouldn't split the infrastructure given the differences in risk.
The concept of strong safeties was not in place. Safeties refer to layers that go beyond common trust mechanisms. In this case, signing a transaction of that magnitude solely based on multi-signature approval was completely insufficient. There should have been additional safeguards, such as special approvals and extra verification steps, specifically designed for transactions within that amount range.
Indeed. As in, the organization should only sign such transactions when all signers are present in person in a secure location and they follow a procedure witnessed by independent auditors. “Work from home” when you control billion in value does not cut it.
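The extra-safeguards idea can be sketched as a tiered policy: the larger the transfer, the more independent checks it must clear before any key touches it. Thresholds and check names here are invented for illustration, not any real exchange's controls.

```python
# Safeguard tiers: (upper limit in USD, required checks).
TIERS = [
    (10_000, {"multisig"}),
    (1_000_000, {"multisig", "second_channel_confirmation"}),
    (float("inf"), {"multisig", "second_channel_confirmation",
                    "in_person_ceremony", "independent_audit"}),
]

def required_checks(amount_usd: float) -> set:
    """Return the set of safeguards a transfer of this size must pass."""
    for limit, checks in TIERS:
        if amount_usd <= limit:
            return checks
```

A routine payout then clears with just the multisig, while a billion-dollar movement cannot proceed without the in-person ceremony the parent comment describes.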
This was a multisig, meaning M out of N signatures from different signing devices were needed to sign a transaction. The attacker infected enough signer devices to go unnoticed, and the signers failed to verify what they were signing on air-gapped devices.
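A toy M-of-N check makes the failure mode concrete: collecting M valid signatures proves nothing if every signer was shown the wrong payload, because they all validly signed the attacker's message. HMAC stands in for per-signer keys here; a real Safe verifies ECDSA signatures over the transaction data on-chain.

```python
import hashlib
import hmac

SIGNER_KEYS = {f"signer{i}": f"key{i}".encode() for i in range(5)}  # N = 5
M = 3  # threshold

def sign(signer: str, message: bytes) -> str:
    return hmac.new(SIGNER_KEYS[signer], message, hashlib.sha256).hexdigest()

def threshold_met(message: bytes, sigs: dict) -> bool:
    """True if at least M known signers produced valid signatures."""
    valid = sum(
        hmac.compare_digest(sign(who, message), sig)
        for who, sig in sigs.items() if who in SIGNER_KEYS
    )
    return valid >= M

evil = b"delegatecall: hand control of the Safe to the attacker"
sigs = {w: sign(w, evil) for w in ["signer0", "signer1", "signer2"]}
# The contract happily accepts it: every signature really is valid.
assert threshold_met(evil, sigs)
```

The cryptography held; the humans signed the wrong `message`, which is why verification on an independent, trusted display matters more than the signature count.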
But they didn't know the amount, because the UI showed them a different value. If it's for 50 ETH and you regularly sign transactions for 100-200 ETH, you may be a little less thorough.
If the setup you are using has the ability to perform large transactions then you must verify all transactions regardless of size as though they are large.
It's a security domain issue. A highly secure system involves highly secure controls. Bypassing those controls for lower risk activities will typically reduce the security of the entire system. You need an entirely independent low or medium risk system.
The software development practices of banks are probably a good example here.
I hate how complexity has become the norm in the industry. Instead of having simple systems with code and modules that are simple, fit-for-purpose and fully auditable, the approach has been to have insanely complex systems and then to add some even more complex security solution on top like CrowdStrike. Seems like a bandaid patch.
Multi-sig means multiple signatures, by multiple private keys. Nothing about that means that they have to be by multiple people - this isn’t secure like a bank - or that they aren’t vulnerable to the same attack.
Sure, but I mention it because it’s not a 1:1 mapping and if they aren’t rigorously auditing their behaviour it wouldn’t exactly be unheard of for people to know coworkers passwords or, more likely, for most of them to just trust someone saying it’s legit. If the tweet about it being a smart contract update is accurate, it’d be especially plausible that people shirked their responsibility and just approved it without review. The multiple part really doesn’t help enough if people aren’t independently verifying requests.
The main takeaway I have is that their “cold” wallet wasn’t very cold and they’d messed up a lot of their diligence, so I’d also read any statements from them as products of damage control, similar to how companies talk about “nation-state threat actors” to make it sound like you have to be the Mossad to exploit a Citrix patch that wasn’t installed for most of a year.
I think the article is clear that the attribution to NK was done by independent 3rd party blockchain researchers and not from ByBit.
The article is also pretty clear about the method that was used to compromise ByBit and how it has evolved from previous hacks on cryptocurrency exchanges.
Sometimes it really is a nation-state actor, and whilst it may be a stretch to blame a threat actor of this level if your user data was stolen, this is $1.5B in fungible cryptocurrency: just the sort of thing a pariah state requires and can launder with minimal risk of arrest or any judicial action.
Unsure why the title says this era has arrived as if it's something new. As an internal penetration tester, I can attest it's already a disaster. The issue is that companies live and die by the cope that social engineering is a high bar or that if a vulnerability isn't internet facing, it's not a big deal.
The point of the article seems to be that it used to be bugs and raw incompetence, and now it's graduated to insufficient OpSec.
Significant progress for crypto.
The other side of this coin is all the companies and infrastructure that has popped up, which intentionally or not enables the laundering of ill-gotten cryptocurrency [1].
I have a hard time feeling sympathy here because I consider cryptocurrency to be fundamentally silly. Reversibility of fiat currency transactions is a feature, not a bug.
I feel like securing something like this is practically impossible. There's always the risk of a bad actor who introduces malware for a small fee.
It can still be hard to reverse fiat, even if it's easier than crypto. Try disputing a wire. This is why you should always use a credit card, preferably Amex, for purchases: tons of buyer protection.
Reversibility is a trade-off. It's great if you are on the sending end of a transaction. It can be a nightmare on the receiving end. Irreversibility is the other way around. And both approaches have different costs and assumptions.
I think it’s less about reversibility itself and more the larger system within which it works. Banking works because the companies agree to follow rules so there’s a social context where if I make a mistake you will help fix it because the odds are fair that you will make a mistake at some point, too. In contrast, cryptocurrency is a political movement so the ideological “trust less” purity test matters more than whether the system is actually used. There is no technical reason why a system couldn’t have something like a settlement period to allow fraud reversal.
A settlement period wouldn't even run against the ideology, only the convenience factor (and implementation complexity, and perhaps transaction fees). More generally, I think a number of the issues with crypto are rooted in things happening immediately.
The ideology I was referring to was more of the trust-less design and “be your own bank” philosophy: many of these problems become easier if you have a third party who can do things like reverse transactions, but then you’re not getting rid of banks and are acknowledging that governments have power over the system. They do anyway, but there’s been a lot of desire to say otherwise.
An algorithmically enforced settlement period where the final result of the entire transaction is visible on the chain but reversible by either party doesn't seem like it would run against that ideology.
No, but it’s a lot more work and it undercuts the marketing claims about being faster. If the industry grows up, I’d expect to see things like that happen.
Opt-in wouldn't affect speed. If mandatory, make it log10(amount in thousands) hours. I'm sure you can afford to wait 4 hours for a million-dollar transfer to clear. Bybit would have had 7 hours to realize and revert the mistake in that case.
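One reading of that proposal in code (the exact formula is my guess at what the comment intends): the settlement delay grows with the logarithm of the amount, so retail transfers clear almost instantly while a $1.5B sweep sits reversible for hours.

```python
import math

def settlement_delay_hours(amount_usd: float) -> int:
    """Delay before a transfer becomes irreversible, in whole hours."""
    thousands = amount_usd / 1_000
    if thousands <= 1:
        return 0  # small transfers clear immediately
    return math.ceil(math.log10(thousands))

assert settlement_delay_hours(500) == 0
assert settlement_delay_hours(1_500_000_000) == 7  # the Bybit case
```

Because the delay is logarithmic, even a transfer a thousand times larger only waits three more hours, which keeps the inconvenience bounded while giving large holders a real reaction window.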
Adjusting your comment for the situation:
> Is $100.00 in cash silly? It has the same property (non-reversibility)
No, not silly if that's what I am comfortable to keep on me (wallet, mattress, etc) and I'm mugged/robbed most people will recover. (Especially if you're also able to afford the inherent risk of crypto.)
> Is $1,500,000,000.00 in cash silly? It has the same property (non-reversibility)
YES! And probably a challenge for most humans even if you're able to get that cash in the limited US $100,000.00 bill [1] - that's 15,000 green slips of paper. (I'm making a bold assumption that this link [2] is reasonably accurate for the physical scale, though it apparently shows only 13,000, not the 15,000 needed.)
They effectively treated the $1.5B like a pile of cash in a fence with a few (easily pickable apparently) locks keeping it shut.
That SHOULD have been in a 100% offline, air gapped system with multiple levels of 2+ person approvals to access.
But this failure implies to me that even THEY didn't really consider the crypto assets they were holding as something with a real value either.
I just want to piggyback off this and discuss the scale in terms of the largest most readily accessible bill, the $100 bill. The relevant parts are [1]:
- Height: 66.3mm
- Width: 156mm
- Thickness: 0.0043 inches = 0.11mm
- Weight: 1.0g
So the volume is 1,138 mm³. You need 15M notes, so that's just over 17 cubic meters or approximately 603 cubic feet, which is a cube roughly 2.6 meters (8.5 feet) on each side, weighing in at 15 metric tons or 33,000 pounds. Put another way, that's over half the volume of a standard twenty-foot shipping container (~1,100 cubic feet).
But let's get it more compact. The current gold price seems to be about $2,939 per troy ounce, which is 31.1035 g. You need 510,378 troy ounces, which is actually heavier at 15.87 metric tons but way more compact. Given a density of 19.32 g/cm³, that's 822,000 cm³ or 0.822 cubic meters or 29 cubic feet.
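The arithmetic above is easy to reproduce from the quoted bill dimensions and gold price:

```python
# Sanity-check the figures above ($1.5B in $100 bills vs. in gold).
bill_volume_mm3 = 66.3 * 156 * 0.11        # one $100 bill, ~1,138 mm^3
notes = 15_000_000                         # $1.5B / $100

cash_m3 = notes * bill_volume_mm3 / 1e9    # mm^3 -> m^3, ~17.1
cash_tonnes = notes * 1.0 / 1e6            # 1 g per note -> 15 t
cube_side_m = cash_m3 ** (1 / 3)           # ~2.57 m per side

troy_ounce_g = 31.1035
troy_oz = 1_500_000_000 / 2939             # at $2,939 per troy ounce
gold_tonnes = troy_oz * troy_ounce_g / 1e6         # ~15.87 t
gold_m3 = troy_oz * troy_ounce_g / 19.32 / 1e6     # at 19.32 g/cm^3, ~0.822 m^3
```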
Whatever the case, it's a lot less practical to steal.
Orders of magnitude matter, and you have to look at the overall system. You can’t move $1.5B in cash without a fleet of trucks and a lot of time, and serious banking has lots of safeguards around it to prevent thefts by requiring more people to cooperate on an insider theft.
Cryptocurrency was designed as a political statement rather than a serious banking system so you effectively have the same level of precaution for both large and small amounts, akin to a bank keeping a billion dollars in the teller’s tray.
Taking a step back from this attack, it looks like the new crypto-reality is far far far immature security-wise & compliance-wise ("compliance to what??" you can ask me).
While it is nearly impossible to steal $100mn from one of the mega-banks, those <expletive> crypto bros, a bunch of failed morons (self-proven by all these hacks), manage to lose people's money. Now... I am not defending the banking system (and its ethics/morals), but damn it, they do a f-a-r better job at IT Audit/IT Compliance/IT Sec (my bread and butter for decades).
Being in the thick of it, I can tell you the compliance side is pushing towards what exists in traditional finance, be it IT, money laundering, accounting practices, etc. - at least in Europe and, to a lesser extent, the US. If you go work at new banks (say Revolut or N26) or at growing asset-managing crypto companies in Europe, you'll find the landscape to be extremely similar.
As far as I'm concerned, if you're parking money with a company based in an area that has lax regulation you're holding the gun that'll shoot your foot.
I have a hard time seeing something like this happen at Bitpanda or Kraken, though you never know.
The difference is that conventional banks can roll back transactions. The normal banking system is essentially a consensus mechanism: "A: I owe you this amount. A: I just transferred you this amount, ok? B: Yup, accepted, thanks." If something goes wrong, A can say "Whoops, I made a mistake. Reverse it please; here are the laws stating that in this case I have the right," and B must comply. In cryptocurrencies, by design, "the code is law", and that law makes no provision for reversing transactions. So you can lose any amount of currency to an illegal act or even a simple error, like transferring to a dead address.
> those <expletive> crypto bros, a bunch of failed morons (self-proven by all these hacks)
Bankers are a bunch of idiots, too. I know this to be true because that one investment bank collapsed a bunch of years ago.
In all seriousness though, ETH is just a commodity; a bearer instrument; a thing. It's similar to gold or cash in some ways. If you store it properly, you're fine. If you give it to someone untrustworthy who loses it, of course that's a problem.
Well-regulated banks can start holding crypto on behalf of customers as soon as they're given the regulatory go-ahead. They've stored gold in vaults for thousands of years; they can store crypto in digital vaults too.
I’d be shit scared of a trad-fi institution holding crypto. I doubt they have the operational muscle, instinct, and know-how to properly safeguard it. Unless they partner with someone who does, which is what they’d likely do.
> Unless they partner with someone who does, which is what they’d likely do.
They're already doing it. Most crypto or crypto-adjacent products you'll see from traditional firms rely on a provider white-labelling crypto exposure.
I'm not sure how you'd do compliance, though. At least not universally. You could (which I suppose is your point) implement compliance requirements for crypto companies operating facilities on your soil. That doesn't really do anything for decentralized systems though
Compliance has a centralizing effect, for example the American OFAC sanctions list.
You can do business outside of it but you're cutting yourself out of a lot of institutional money.
In the end while there's a lot of money being made in sanctions-evasion, money-laundering and whatnot, at the macro level the industry prefers trying to cozy up to Blackrock and Vanguard than to narcos.
Recent and related: Bybit loses $1.5B in hack - https://news.ycombinator.com/item?id=43130143
In a multisig interaction there are 3 ways to get hacked:
- The multisig smart contract is owned
- The computer you're signing on is owned
- The hardware wallet (ledger, trezor) you're using is owned
The multisig contract in question here (Gnosis Safe) has shown to be incredibly robust, and hardware wallets are very difficult to attack, so the current weak point is the computer.
Cryptocurrency companies need to start solving this by moving to a more locked-down, dedicated machine for signing, as well as actually verifying what is shown on the tiny hardware wallet screen instead of blindly clicking "yes".
I think this shows that the best protection is to just send many small transactions, never a big one. Define some max tolerance for loss and send that. This is the advantage of XRP: instant and very cheap transactions. You can just automate many small transactions. If something goes wrong, you can overhaul everything before losing all your $.
The missing part is that you cannot apply the same procedure to 1 ETH as you would to 1k ETH, regardless of the technology being used.
They should use an air-gapped computer that goes online only when signing something. Not having this procedure is an opsec failure.
Why should it go online at all? $1.5 billion buys a lot of plane tickets to the same physical place, and how frequently do they need to be accessing the whole lump, anyway?
For that matter, I know signatures are long and human-unfriendly, but isn’t it on the order of a couple hundred bytes? Surely $1.5 billion buys transcribing the putative signature request into an isolated machine in a known state, validating/interpreting/displaying the request’s meaning on that offline machine, performing your signing there offline, copying down the result, and carrying the attestation to your secret conclave lair to combine with the others’ or whatever?
What you should do is sign the transaction on an offline computer (which is booted from a linux OS on a flash drive with only the essential software), simulate the transaction to verify it does what you expect, and then save the signed transaction to a flash drive. Then you can submit your transaction on a connected computer with confidence that you didn't sign your tokens away to someone else.
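That workflow can be sketched end to end. This is a toy: HMAC stands in for the real secp256k1 ECDSA signature, the key name and transaction fields are invented, and the "flash drive" is just a Python dict. The point is the separation of duties - signing happens where the key lives, and the blob is independently re-verified before it touches a networked node:

```python
import hashlib
import hmac
import json

# Toy stand-in for a private key. Real wallets sign with secp256k1 ECDSA and
# the network verifies with the public key; HMAC (symmetric) is used here only
# to keep the sketch dependency-free.
OFFLINE_ONLY_KEY = b"this-key-never-touches-a-networked-machine"

def sign_offline(tx: dict) -> dict:
    """Step 1, on the air-gapped machine: inspect and simulate the
    transaction, then sign it. The returned dict is what gets copied
    to the flash drive."""
    payload = json.dumps(tx, sort_keys=True).encode()
    sig = hmac.new(OFFLINE_ONLY_KEY, payload, hashlib.sha256).hexdigest()
    return {"tx": tx, "sig": sig}

def verify_before_broadcast(signed: dict) -> bool:
    """Step 2, before submitting from the connected machine: check that the
    payload still matches the signature, so any tampering in transit is
    caught before the blob reaches a node."""
    payload = json.dumps(signed["tx"], sort_keys=True).encode()
    expected = hmac.new(OFFLINE_ONLY_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signed["sig"], expected)
```

If malware swaps the recipient after signing, the signature no longer matches and the broadcast step refuses the blob.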
That’s precisely what happened in this attack.
They were attacked when they went online
No, the computers were pre-infected. If they had used air-gapped systems only for signing, there would be almost no attack vector left other than some major zero-click zero-day stuff, and in that case everyone is screwed anyway.
Or, you know, employ technology that allows for mistakes to be fixed.
> attackers stole approximately $1.5B from their multisig cold storage wallet. At this time, it appears the attackers compromised multiple signers’ devices, manipulated what signers saw in their wallet interface, and collected the required signatures while the signers believed they were conducting routine transactions.
If hackers can get remote access and 'manipulate what signers saw in their wallet interface' that doesn't sound like cold storage to me.
Isn't cold storage about where the keys are? You still need to be able to actually interact with a chain.
My understanding of "cold storage" was always that the keys are not accessible to the internet. That could be stored on paper, a flash drive or engraved in metal and put in a safe, or it could be in a regular digital wallet on a device never connected to the internet. If you want to do transactions, put it on an airgapped device, create the transaction, then move the transaction to an internet-connected device to broadcast the transaction.
Ditto.
The internet is adversarial, a cold wallet should only be reachable by a wrench attack.
Cold storage means the coins are stored offline: you sign the transaction on an offline computer and then broadcast it from an online one. If the offline computer has malware, the transaction data can be tampered with at the offline stage. In theory the attack works if both computers are compromised and both show erroneous data - the offline computer tampers with the transaction, signing it over to the wrong recipient while displaying the correct one. This is hard to pull off, as both computers need to be infected. The super-paranoid can prevent it by verifying on a third computer, e.g. a VPS, or by sending small amounts.
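The third-computer check is conceptually simple: on a device that shares no software with the signing machines, decode the transaction that was actually signed and compare it against what was intended. A sketch with hypothetical field names (a real implementation would RLP-decode the raw signed transaction):

```python
def independent_review(decoded_tx: dict, intended_to: str,
                       max_value_eth: float) -> bool:
    """Run on a third, independently provisioned device. Both the recipient
    and the amount must match intent, or the blob is never broadcast."""
    return (decoded_tx.get("to", "").lower() == intended_to.lower()
            and decoded_tx.get("value_eth", float("inf")) <= max_value_eth)
```

Even if the offline signer and the online broadcaster are both compromised, the attacker now also has to tamper with a machine they had no reason to know exists.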
It is possible to infect the offline computer by infecting a USB drive with stealth malware, which then propagates to the offline machine.
It could also be an inside job, in exchange for an employee getting a kickback from North Korea. It's not like this hasn't happened in the past. Imagine being a low-paid employee at an exchange and being enticed by an offer of tens of millions from North Korea to pretend to be hacked and infect your own computers with malware they supply. This would be easy for an employee with access to the computers to do, and then pass off as a hack.
There is no concept of "coin storage" in the actual security model of cryptocurrency. The security model of cryptocurrency is about the storage of keys.
"Cold storage" has come to mean that the keys are stored in some offline location. It doesn't necessarily mean that the keys are hard to access or that the money being moved is otherwise hard to get to. That used to be what it meant, but practically, a wallet on a hardware keychain is called "cold" exactly the way a wallet whose keys are split up on slips of paper between 5 different physical vaults is "cold".
Usually you want to boot from a cryptographically verified medium where a checksum can be verified before you execute the system.
The emphasis is on running the correct software. If you have to input cryptographic data every time you boot that's okay because you're offline and should be in a secure room (no internet connected devices).
But yeah, malware attack is still possible if you don't have a secure chain and that's a long one.
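Verifying the boot medium can be as simple as a recorded SHA-256 digest, checked on a machine you trust before every signing session (a sketch; `signer.img` is a stand-in for the real boot image):

```shell
# Stand-in "boot image" so the example is self-contained.
printf 'known-good signer OS image' > signer.img

# At build time, record the known-good digest...
sha256sum signer.img > signer.img.sha256

# ...and before every use, verify the medium hasn't changed.
# Prints "signer.img: OK" and exits 0; prints FAILED and exits
# non-zero if even one byte differs.
sha256sum -c signer.img.sha256
```

This only pushes trust back to the machine that runs the check and to the stored digest, which is why the comment notes the chain of custody is long.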
Stuxnet managed to infect air-gapped computers.
Yeah, it sounds like an attack on the Metamask extension, or the browser hosting it.
not at all
The online security world is so wild. In pretty much any other field of engineering, foreign nation states explicitly targeting the thing you built is just kinda out of scope. There's no skyscraper in existence that is designed to withstand sustained artillery shelling, and your car is not going to withstand a tank shell either. Neither do they have to be designed to that specification. If North Korea killed someone with a missile or even destroyed a minor building or something, there would be public outrage and swift (military) repercussions.
But online, it's the wild wild west. The North Koreans can throw anything they want at your systems, and the main response you get is "lol get good noob, should have built more secure systems", despite the opposing side having quite literally hundreds of people specifically trained to take on organisations like yours.
Not saying the Bybit people couldn't have been more careful of whatever, but let's appreciate how wild the online environment actually is sometimes.
Your logic is backwards.
Factories are not designed to withstand sustained aerial bombardment because the chance of sustained aerial bombardment is small to non-existent due to effective (geopolitical) mitigations.
But, if you are in a active war and being actively bombed, then you absolutely design your factories to be resistant to sustained aerial bombardment. You do not just throw your hands up in the air and say: “Who could have expected this totally routine and expected situation in our operational environment? We can not be blamed for not adequately mitigating known risks and intentionally mischaracterizing our risk mitigations as adequate for commonplace risks we know we can not adequately mitigate.”
If there were effective geopolitical mitigations that made the chances of an attack minimal, then your argument would hold weight. But that is not the case. Failure to accommodate known, standard, commonplace failure modes is incompetence. Deceptively implying you do mitigate risks, while lying or with disregard for the truth, is fraud and maliciousness.
There is also a second problem with your argument: the relative accessibility of executing these attacks, which is trivial compared to military operations - easily within the reach of lone individuals, let alone groups, organized crime, or entire governments. They require 10,000% security improvements to actually stop commonplace and routine attacks. But that is a longer argument I am not going to get into right now, since the qualitative argument I made above applies regardless of the quantitative difficulty.
>But, if you are in a active war and being actively bombed, then you absolutely design your factories to be resistant to sustained aerial bombardment.
That's not really a viable strategy. It has been tried a few times - Mittelwerk and Kőbánya spring to mind - but you can't really build a self-contained factory. If your enemy can't bomb the factory, they'll bomb the roads and railways serving your factory, they'll bomb the worker housing, they'll bomb the less-sensitive factories that supply your factory with raw materials and components. You very quickly run into the diseconomies of operating under siege conditions.
At least during WWII, it was generally far more effective to rely on camouflage, secrecy and redundancy. Rather than having a super-fortified factory that shouts "this is vital national infrastructure", spread your capacity out into lots of mundane-looking facilities and plan for a certain level of attrition. Compartmentalise information to prevent your enemy from mapping out your supply chain and identifying bottlenecks. Your overall system can be highly resilient, even if the individual parts of that system are fragile.
All of those are methods to be resistant to aerial bombardment in my book.
If nobody knows where your factory is, it looks like a parking lot from the air, and you have multiple smaller factories instead of one big one to limit the impact of any single hit, then you are resistant to aerial bombardment, even if your roof isn't any sturdier than a normal factory's. Same if the factory is out in the open but everybody thinks your drone factory produces windshield wipers.
Yes, I am aware. I was using “resistant to sustained aerial bombardment” in the general sense of all classes of mitigations, not just fortification.
But thank you for elaborating when I was too lazy to. It helps further reinforce my point that the key is mitigating the risk however you can, not specific risk mitigations somehow absolving responsibility.
> they'll bomb the worker housing, they'll bomb the less-sensitive factories that supply your factory with raw materials and components. You very quickly run into the diseconomies of operating under siege conditions.
All true, and German WW2 production kept increasing despite the bombing.
Sure, if there was an active war going on. But while NK and the USA are not exactly friendly, they're definitely not at war either. In basically any other field, the question of "what do we do when a nation state deploys hundreds of people, well funded and well trained, specifically to screw us over?" is met with some variant of "that's why we pay taxes, so the army can protect us from that".
A normal bank being robbed for 1.5 billion, ESPECIALLY by a pariah country like North Korea, would absolutely not be met with "oh that was definitely your own fault" as many of the sibling comments seem to imply.
A normal bank was robbed of $1B back in 2016, likely by North Korea, and the global reaction was pretty much a collective shrug:
https://en.wikipedia.org/wiki/Bangladesh_Bank_robbery
According to that page, the global reaction was to block most ($850M) of the fraudulent payments, recover a third of the remainder, add additional security to the SWIFT network and raise standards for banks, and push for penalties for the criminals who participated. That seems like more than a shrug.
And we are likely to see the same response here. Those coins are easily tracked, so the attacker is going to be lucky to get 25% of the value by selling them to someone prepared to take the risk of laundering them.
A crypto exchange does not have the same political influence as a national bank, which makes the situations very different, aside from all crypto stuff.
That’s not a given, and it wouldn’t be the same in any case since the victim is still out 100% of the loss as opposed to 10-15%.
True, unless the Lazarus group follows the path of Omar; then the victim could be out only 75%.
Say what you want about CBDCs, but they would fix this specific failure mode of digital assets where an enemy nation-state can steal $1.5 billion worth of the token.
I’d also add the number of cases where people holding Bitcoin are being threatened/tortured into transferring it. One less appreciated benefit of a system with reversible transactions is that it makes it significantly harder to do something like that.
Actually, we have been at war with North Korea continuously since the 1950s; we only have a ceasefire with them.
The Korean War ended with an armistice signed on July 27, 1953, which stopped active fighting but did not establish a formal peace treaty.
https://en.m.wikipedia.org/wiki/Korean_conflict
I know that soldiers stationed in South Korea get paid at the wartime rate.
Only another 300 years to beat the record between Netherlands and the Isles of Scilly.
Maybe the US and North Korea will sign a peace treaty in the 24th century. Captain Picard can mediate.
US and North Korea have never been at war though. The US (and other countries) assisted South Korea under the UN flag.
https://en.wikipedia.org/wiki/United_Nations_Command
It is not about “active war”. It is about mitigating known, routine risks. You are confusing a description of the problem and a description of the solution.
Routine harmful cyberattacks are a problem. You do not get to abdicate responsibility because it is too hard. If you cannot handle the operational environment, then do not operate in it.
Maybe the solution is “go to war due to cyberattacks”, but that is not happening right now so their systems are inadequate for the expected operational environment (i.e. incompetent). And everybody knows this is the operational environment, everybody knows they can not deal with expected problems, and everybody does not adequately inform their customers because it would be detrimental to their bottom line.
As you say, it's weird. There absolutely is an all out war going on online. They attack us and we presumably throw just as much at them.
The chief US adversaries have the advantage of national firewalls, and less of their crucial infrastructure is online, so it is perhaps less effective against them. Or for all I know they are subject to equivalent thefts every day and just keep it out of the news.
No one said "they didn't need to defend", or at least that's not how I read OP. The observation is merely that the situation is so wildly different from the physically local world. It's remarkable.
I suspect the truth lies between the two - we are under constant attack, but we aren’t as a society reacting as if we were.
It’s like a building occasionally gets hit by a shell and we don’t get on a war footing.
The closest analogy I can come up with is England in the 1600s and early 1700s. Fairly regularly, ships would be attacked by pirates from North Africa, and sometimes an actual land raid would occur - pirates from North Africa would take slaves from small seaside towns.
It was not till England's navy grew strong enough that the threat was eliminated - and perhaps that's the real issue here: we know it's happening, we cannot turn the Wild West into urban peace, so we just have to keep taking the licks and keep building more secure and stronger defenses.
> The closest analogy I can come up with is England in the 1600s and early 1700s.
I like your point, but that is a hell of an analogy. 1600 is when they formed the East India Company, which was basically a state-sponsored bunch of pirates, looting the wider world with its hundreds of thousands of soldiers. https://en.m.wikipedia.org/wiki/East_India_Company
It is not about war footing; it is about mitigating known environmental hazards. This can be done geopolitically, collectively, technologically, etc., but the point is that you need to mitigate or accommodate the known, routine risks.
It is silly to point to situations where the risks were mitigated as evidence that you do not need to mitigate the risks as the person I was responding to did. You can do that to argue that we need to mitigate the risks in a different manner, but not to argue that you can not be blamed for not mitigating the risks.
And for examples from history, we could look to Israel's anti-rocket defenses as an example of handling occasional shelling; ancient castles and walls for handling stray bandits, mercenaries, and armies; private merchant naval vessels of the 1600s, which routinely carried their own cannons; armored compounds and communities in areas with high crime; armored trains and trucks. This is standard practice. We just figured out more effective and cheaper collective mitigations. But until that happens, you need to handle it yourself or you are incompetent.
The state of online security hasn't changed much.
What has changed is that there is a digital (as opposed to gold) international form of money whose transactions cannot be reversed or stopped. Bybit and other holders of large amounts of crypto are operating with a fundamentally different threat model, where it's worthwhile for an attacker to invest millions of dollars of effort (for the Bybit payout, even tens or hundreds of millions) attacking them. Everyone else just needs to worry about getting ransomed for a much smaller amount.
There's a long BBC podcast on Lazarus that touches on the spending.
The members are state-sponsored and young/bright, top 0.1% academic sorts. At one point, the BBC got access to a conversation with one of the hackers, and their only question was "how much do you get paid?" (the context was that the hacker thought they were talking to someone else in the tech space).
Apparently they aren't paid very well at all. Far less than the average Western IT worker. Their lives are not luxurious either. They're in barracks style living quarters with strict schedules and travel. Presumably, the anonymous Lazarus hacker was putting out a probing question because they must have been ruminating about what life on the other side would be like, what they are really worth, etc.
That's part of the power of Lazarus: the ability to dedicate resources far in excess of what most expect, thanks to their indentured-servant hackers. (The opportunity to join is presented as a gift, which to some extent it is, because it comes with the extremely rare opportunity to travel. Many of them are in China.)
It is essentially a financial institution handling billions of dollars. It is not the average website of your neighbourhood restaurant that got hacked, or a scattershot ransomware attack. I would expect that at that scale, nation-state actors are not out of scope, even if it is usually more about infiltration and IP/secrets theft than outright robbery.
Eh, said financial institution chose a field of operation where certain risks are present by design. They make good profits because other institutions judge those risks too high.
It's the mob attacking a casino and making off with chips. That people keep valuing those chips is one of the mysteries of our days.
That’s a really good point. If a nation state bombed a private oil rig with $1.5B in damages all hell would break loose. But if it’s a cyber attack no one cares and we blame the victim.
I think it really boils down to plausible deniability, and the fact that it’s convenient for the governments on the receiving end to ignore the damages done to private citizens when there’s no physical harm and clear responsibility.
No president is going to bomb NK because they attacked a crypto exchange. Maybe they should, but it’s not something the public will support. So it’s easy to say “oh well, we don’t really know for sure who did it” and call it a day. It’s our own fault.
I also agree that private citizens have a responsibility to secure ourselves, but where do you draw the line? If I don’t have an AA gun on my roof, am I responsible for enemy warplanes bombing my business? Isn’t this partially why I pay taxes?
Well, there's a couple of airliner shootdowns that kind of go in this category. MH17, PS752, AHY8243... That's at least $0.5B in damage plus many hundreds of civilian lives.
It seems more analogous to the Soviets infiltrating your small business. Which no small business owner is prepared to screen for, and which happened.
If your small business has $1.5billion in the safe then it’s not a “small business”
Especially if it’s $1.5bn of other people’s money
At a certain scale nation-state-level actors have to be part of your threat model, there's no excuse.
But yeah, it's quite baffling how in a couple years we seemingly went from stealing email addresses to credit cards to straight up billions of dollars.
If we can expect that everything shifts online eventually, where will this end? Clicked on the wrong link? Guess your house is gone... tough luck.
This is why it is dangerous to replace people and laws with code. With laws, you eventually get to talk to a human being who has leeway in interpreting the situation. With code, it just works the way it does, regardless of circumstances.
Cryptocurrencies avoid a central authority, but by doing that, they also avoid any possibility of human discretion, oversight, or recourse. There is no institution to appeal to, no customer service to call, and no regulator to enforce fairness.
It does feel, doesn’t it, that the cryptocurrency crowd seems mainly to comprise the kinds of actors who correctly anticipate that the legitimate banking sector—and most humans, if asked—will say “no” to them…
Which I guess the idealists would say is part of the point: “first they came for the DPRK extortionists, and I said nothing,” etc.
> correctly anticipate ... will say “no” to them
Could conceivably, under different circumstances, say no. And are uncomfortable with that state of affairs.
Or to be snarky. Doesn't it seem that the crowd that gets up in arms about unlawful search and seizure are the sort of actors who correctly anticipate that the legitimate authorities would take issue with their behavior?
What concerns me is the idea that risks like these might leak into the regulated financial sector.
Right now, if I want to avoid my dollars being among those billions stolen, I can (and do) keep them someplace far away from instant digital currency. With firms that, while they could move large sums of their money somewhere else, build in a whole lot of friction in proportion to the amount being moved—by their customers, their staff, and their counterparties. Limited and well-understood modes of potential malfeasance, and strong structural discouragement for each of them.
There is nothing that I need to do that needs to move fast. But I’d hate for the firms servicing my slow, boring needs to be tempted by the new shiny.
> If North Korea killed someone with a missile or even destroyed a minor building or something, there would be public outrage and swift (military) repercussions.
Russia kills people in the West with nerve agents or polonium, cuts electrical and Internet cables, blows up ammunition factories or puts incendiary devices on cargo airplanes, and there are no repercussions.
This post is light on the details of how the hack occurred. Given it talks about their toolkit, am I right to understand that people were tricked into downloading and running malicious software?
Found the answer, yes https://x.com/0xcygaar/status/1892967062160511164
"At this time, it appears the attackers compromised multiple signers’ devices, manipulated what signers saw in their wallet interface, and collected the required signatures while the signers believed they were conducting routine transactions."
Does anyone know how many signers there were/are?
Remember when ETH hard forked over $50M stolen 9 years ago?
That was way more relative to the size of the network, and the hacker still got his forked tokens, so he did OK.
Genuine question because I know almost nothing about crypto: who actually lost money in this attack? Lots of individuals?
Depends how this plays out. If Bybit collapses due to this, yeah, lots of individual investors. Though history shows (MtGox, FTX) that eventually they’d be made at least partially whole.
If Bybit doesn’t collapse (can handle all the on-going withdrawals), then Bybit lost money that they’ll need to recoup through operations.
Currently it’s trending towards the second scenario.
My understanding is this multisig failed because, like most security, everyone just pressed yes and didn’t communicate, investigate, or ask questions, defeating the purpose of a multisig.
Yea, how is it that multiple people signed a transaction for over a billion dollars of assets without due diligence?
If you did this for non crypto there would be lawyers, bankers, etc involved in the transaction.
Root certificate authorities have already solved this problem with signing rituals which take place in person in an air gapped vault on specialized hardware and multiple parties as witness.
They didn't sign a transaction for 1 billion dollars. They all signed what they thought was a routine transfer, but in reality what they signed gave the hacker full control of the smart contract (the Gnosis Safe) in which the $1.4B of tokens were stored.
The hackers, having gained control of the smart contract, proceeded to empty it of funds.
TFA seems to suggest that the thieves modified the signers’ applications to display a routine transaction but actually sign the heist transaction.
Given that the UI they saw was compromised, they likely believed they were signing some routine 1M rebalancing transaction.
Odd that you wouldn't use separate keys for that given the wildly different levels of risk involved.
Separate keys for what? They believed they were signing a routine transaction. That’s the whole idea of the hack.
Splitting funds over 100 wallets would’ve helped. A 100x lower amount would be lost.
And/Or having separate hardened devices used only for signing.
Separate keys (ie wallets) for routine small transactions versus the cold wallets used for huge sums. Perhaps I've misunderstood but it sounded like they performed a rare transaction while being led to believe it was a routine one. I'm wondering why you wouldn't split the infrastructure given the differences in risk.
The concept of strong safeties was not in place. Safeties refer to layers that go beyond common trust mechanisms. In this case, signing a transaction of that magnitude solely based on multi-signature approval was completely insufficient. There should have been additional safeguards, such as special approvals and extra verification steps, specifically designed for transactions within that amount range.
Indeed. As in, the organization should only sign such transactions when all signers are present in person in a secure location and they follow a procedure witnessed by independent auditors. "Work from home" when you control billions in value does not cut it.
The displayed information was tampered with by malware; communication would not have helped.
I really do not understand why they do not separate these into multiple separate wallets
They did.
This was a multisig - meaning M out of N signatures from different signing devices were needed to sign a transaction. The attacker infected enough signer devices to reach the threshold without being noticed, and the signers failed to verify what they were signing on air-gapped devices
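A toy sketch of the M-of-N idea, with digests standing in for real cryptographic signatures (illustrative only, not the actual Safe API). It shows why compromising the displays of just M signer machines defeats the scheme: every "signature" commits to whatever digest the signer was shown.

```python
# Minimal M-of-N threshold approval sketch. Names and structures are
# illustrative; a real multisig verifies ECDSA signatures on-chain.
from hashlib import sha256

def tx_digest(tx: bytes) -> str:
    return sha256(tx).hexdigest()

def approve(signatures: dict[str, str], signers: set[str],
            threshold: int, tx: bytes) -> bool:
    """Accept tx only if >= threshold known signers signed this exact digest."""
    digest = tx_digest(tx)
    valid = {s for s, d in signatures.items() if s in signers and d == digest}
    return len(valid) >= threshold

signers = {"alice", "bob", "carol"}
tx = b"transfer 50 ETH to 0xabc"
sigs = {"alice": tx_digest(tx), "bob": tx_digest(tx)}
print(approve(sigs, signers, threshold=2, tx=tx))  # True: 2-of-3 met
```

If the attacker's malware swaps `tx` on enough signer machines, each victim happily signs the heist digest and the threshold is met for the wrong transaction.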
> the signers failed to verify what they were signing on air-gapped devices
This is the part that really surprises me given the amount of money involved.
But they didn't know the amount because the UI showed them a different value, so if it shows 50 ETH and you regularly sign transactions for 100-200 ETH, you may be a little less thorough.
If the setup you are using has the ability to perform large transactions then you must verify all transactions regardless of size as though they are large.
It's a security domain issue. A highly secure system involves highly secure controls. Bypassing those controls for lower risk activities will typically reduce the security of the entire system. You need an entirely independent low or medium risk system.
The software development practices of banks are probably a good example here.
They have other cold wallets, so I guess they do?
They did. This is why the exchange is still solvent.
I hate how complexity has become the norm in the industry. Instead of having simple systems with code and modules that are simple, fit-for-purpose and fully auditable, the approach has been to have insanely complex systems and then to add some even more complex security solution on top like CrowdStrike. Seems like a bandaid patch.
Wild to think that North Korea could assign whole teams of people working 24/7 to trick just one person into clicking a couple of buttons.
Not one person. The multi in multi-sig means multiple.
Multi-sig means multiple signatures, by multiple private keys. Nothing about that means that they have to be by multiple people - this isn’t secure like a bank - or that they aren’t vulnerable to the same attack.
Ok, but in practice having multiple signatures but one signer is pointless, so multi-sig pretty much does mean multiple signers (people)
Sure, but I mention it because it’s not a 1:1 mapping and if they aren’t rigorously auditing their behaviour it wouldn’t exactly be unheard of for people to know coworkers passwords or, more likely, for most of them to just trust someone saying it’s legit. If the tweet about it being a smart contract update is accurate, it’d be especially plausible that people shirked their responsibility and just approved it without review. The multiple part really doesn’t help enough if people aren’t independently verifying requests.
The main takeaway I have is that their “cold” wallet wasn’t very cold and they’d messed up a lot of their diligence, so I’d also read any statements from them as the products of damage control similar to how companies talk about “nation-state threat actors” trying to make it sound like you have to be the Mossad to exploit a Citrix patch which wasn’t installed for most of a year.
I think the article is clear that the attribution to NK was done by independent 3rd party blockchain researchers and not from ByBit.
The article is also pretty clear about the method that was used to compromise ByBit and how it has evolved from previous hacks on cryptocurrency exchanges.
Sometimes it really is a nation state actor, and whilst it may be a stretch to blame a threat actor of this level if your user data was stolen, this is 1.5bn in fungible cryptocurrency, just the sort of thing a pariah state requires and can launder with minimal risk of arrest or any judicial action really.
Unsure why the title says this era has arrived as if it's something new. As an internal penetration tester, I can attest it's already a disaster. The issue is that companies live and die by the cope that social engineering is a high bar or that if a vulnerability isn't internet facing, it's not a big deal.
The point of the article seems to be that it used to be bugs and raw incompetence, and now it's graduated to insufficient OpSec. Significant progress for crypto.
We took the new era out of the title above.
The other side of this coin is all the companies and infrastructure that have popped up which, intentionally or not, enable the laundering of ill-gotten cryptocurrency [1].
I have a hard time feeling sympathy here because I consider cryptocurrency to be fundamentally silly. Reversible fiat currency transactions are a feature, not a bug.
I feel like securing something like this is practically impossible. There's always the risk of a bad actor who introduces malware for a small fee.
[1]: https://www.chainalysis.com/blog/2024-crypto-money-launderin...
Reversible transactions is a feature for fiat money
Reversible transactions would generally be a bug for cash & hard assets, which cryptocurrency is trying to imitate.
1.5 billion in cash would not disappear this easily. You would need trucks just to transport it.
Indeed. If you sell a suitcase of cocaine, you need to launder two suitcases of cash.
It can still be hard to reverse fiat, even if easier than crypto. Try disputing a wire. This is why you should always use a credit card, preferably Amex, for purchases: tons of buyer protection.
Crypto has reversible transactions when both parties agree to use that functionality in advance (well, the reasonably programmable ones do, anyway)
It's not a bug if both parties give consent, which sounds like a wonderful way to transact, to me!
But, when you REALLY want reversibility is when the transaction is done without your consent — when stuff is stolen and you want it back.
Thieves will not tend to consent to reversible transactions.
On the contrary, thieves often use chargebacks to steal from small businesses.
Reversibility is a trade-off. It's great if you are on the sending end of a transaction. It can be a nightmare on the receiving end. Irreversibility is the other way around. And both approaches have different costs and assumptions.
I think it’s less about reversibility itself and more the larger system within which it works. Banking works because the companies agree to follow rules so there’s a social context where if I make a mistake you will help fix it because the odds are fair that you will make a mistake at some point, too. In contrast, cryptocurrency is a political movement so the ideological “trust less” purity test matters more than whether the system is actually used. There is no technical reason why a system couldn’t have something like a settlement period to allow fraud reversal.
A settlement period wouldn't even run against the ideology, only the convenience factor (and implementation complexity, and perhaps transaction fees). More generally, I think a number of the issues with crypto are rooted in things happening immediately.
The ideology I was referring to was more of the trust-less design and “be your own bank” philosophy: many of these problems become easier if you have a third party who can do things like reverse transactions, but then you’re not getting rid of banks and are acknowledging that governments have power over the system. They do anyway, but there’s been a lot of desire to say otherwise.
An algorithmically enforced settlement period where the final result of the entire transaction is visible on the chain but reversible by either party doesn't seem like it would run against that ideology.
No, but it’s a lot more work and it undercuts the marketing claims about being faster. If the industry grows up, I’d expect to see things like that happen.
Opt-in wouldn't affect speed. If mandatory, make it log10( thousands ) hours. I'm sure you can afford to wait 4 hours for a million dollar transfer to clear. Bybit would have had 7 hours to realize and revert the mistake in that case.
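A quick sketch of that amount-scaled delay. The parent's scaling is loose, but a delay of log10(amount in USD) - 2 hours reproduces both quoted figures (~4h for $1M, ~7h for $1.5B); this formula is my reconstruction, not the commenter's exact proposal:

```python
# Hedged sketch: settlement delay that grows logarithmically with the
# transfer amount, so small payments clear fast and billion-dollar
# moves leave hours to notice and revert a mistake.
from math import log10

def settlement_delay_hours(amount_usd: float) -> float:
    return max(0.0, log10(amount_usd) - 2)

print(round(settlement_delay_hours(1_000_000)))      # 4
print(round(settlement_delay_hours(1_500_000_000)))  # 7
```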
bank error in your favor
Is cash silly? It has the same property (non-reversibility)
> Is cash silly?
No, of course not.
Adjusting your comment for the situation: > Is $100.00 in cash silly? It has the same property (non-reversibility)
No, not silly if that's what I am comfortable to keep on me (wallet, mattress, etc) and I'm mugged/robbed most people will recover. (Especially if you're also able to afford the inherent risk of crypto.)
> Is $1,500,000,000.00 in cash silly? It has the same property (non-reversibility)
YES! And probably a challenge for most humans even if you could get that cash in the largest-ever US bill, the $100,000 bill [1] - that's 15,000 green slips of paper. (I'm making a bold assumption that this link [2] is reasonably accurate for the physical scale, though it apparently shows only 13,000, not the 15,000 needed.)
They effectively treated the $1.5B like a pile of cash in a fence with a few (easily pickable apparently) locks keeping it shut.
That SHOULD have been in a 100% offline, air gapped system with multiple levels of 2+ person approvals to access.
But this failure implies to me that even THEY didn't really consider the crypto assets they were holding as something with a real value either.
1- https://en.m.wikipedia.org/wiki/United_States_one-hundred-th...
2- https://www.reddit.com/r/pics/s/GHNABiJh6A
I just want to piggyback off this and discuss the scale in terms of the largest, most readily accessible bill: the $100 bill. The relevant parts are [1]:
- Height: 66.3mm
- Width: 156mm
- Thickness: 0.0043 inches = 0.11mm
- Weight: 1.0g
So the volume is 1138mm3. You need 15M notes so that's just over 17 cubic meters or approximately 603 cubic feet, which is a cube roughly 2.6 meters (8.5 feet) on each side, weighing in at 15 metric tons or 33,000 pounds. Put another way, that's over half the volume of a standard twenty foot shipping container (~1100 cubic feet).
But let's get it more compact. The current gold price seems to be about $2939 per Troy ounce, which is 31.1035g. You need 510,378 Troy ounces, which is actually heavier at 15.87 metric tons but way more compact. Given a density of 19.32g/cm3 that's 822,000cm3 or 0.822 cubic meters or 29 cubic feet.
Whatever the case, it's a lot less practical to steal.
[1]: https://en.wikipedia.org/wiki/United_States_one-hundred-doll...
Orders of magnitude matter, and you have to look at the overall system. You can’t move $1.5B in cash without a fleet of trucks and a lot of time, and serious banking has lots of safeguards around it to prevent thefts by requiring more people to cooperate on an insider theft.
Cryptocurrency was designed as a political statement rather than a serious banking system so you effectively have the same level of precaution for both large and small amounts, akin to a bank keeping a billion dollars in the teller’s tray.
It’s also impossible to steal from afar and transactions of $100/$1000000/$1000000000 each look very different.
Crypto is like having a $1.5 billion bill.
Keeping $1.5 billion in cash is silly.
Taking a step back from this attack, it looks like the new crypto-reality is far, far from mature security-wise and compliance-wise ("compliance to what??" you can ask me).
While it is nearly impossible to steal $100mn from one of the mega-banks, those <expletive> crypto bros, a bunch of failed morons (self-proven by all these hacks), manage to lose people's money. Now... I am not defending the banking system (and its ethics/morals), but damn it, they do a f-a-r better job at IT Audit/IT Compliance/IT Sec (my bread and butter for decades).
Being in the thick of it, I can tell you the compliance side is pushing towards what exists in traditional finance, be it IT, money laundering, accounting practices, etc. At least in Europe and to a lesser extent the US. If you go working at new banks (say Revolut or N26) or at growing asset-managing crypto companies in Europe you'll find the landscape to be extremely similar.
As far as I'm concerned, if you're parking money with a company based in an area that has lax regulation you're holding the gun that'll shoot your foot. I have a hard time seeing something like this happen at Bitpanda or Kraken, though you never know.
Classic “you get what you pay for”.
The difference is that conventional banks can roll back transactions. The normal banking system is essentially a consensus mechanism: "A: I owe you this amount. A: I just transferred you this amount, ok? B: Yup, accepted, thanks." If something goes wrong, A can say "Whoops, I made a mistake. Reverse please; here are the laws stating that in this case I have the right," and B must comply. In cryptocurrencies, by design, "the code is law", and this law does not provide for reversing transactions. So you can lose any amount of currency to an illegal act or even a simple error, like transferring to a dead address.
> those <expletive> crypto bros, a bunch of failed morons (self-proven by all these hacks)
Bankers are a bunch of idiots, too. I know this to be true because that one investment bank collapsed a bunch of years ago.
In all seriousness though, ETH is just a commodity; a bearer instrument; a thing. It's similar to gold or cash in some ways. If you store it properly, you're fine. If you give it to someone untrustworthy who loses it, of course that's a problem.
Well-regulated banks can start holding crypto on behalf of customers as soon as they're given the regulatory go-ahead. They've stored gold in vaults for thousands of years; they can store crypto in digital vaults too.
I’d be shit scared of a trad-fi institution holding crypto. I doubt they have the operational muscle, instinct, and know-how to properly safeguard it. Unless they partner with someone who does, which is what they’d likely do.
> Unless they partner with someone who does, which is what they’d likely do.
They're already doing it. Most crypto or crypto-adjacent products you'll see from traditional firms rely on a provider white-labelling crypto exposure.
I'm not sure how you'd do compliance, though. At least not universally. You could (which I suppose is your point) implement compliance requirements for crypto companies operating facilities on your soil. That doesn't really do anything for decentralized systems though
Compliance has a centralizing effect, for example the American OFAC sanctions list. You can do business outside of it but you're cutting yourself out of a lot of institutional money. In the end while there's a lot of money being made in sanctions-evasion, money-laundering and whatnot, at the macro level the industry prefers trying to cozy up to Blackrock and Vanguard than to narcos.