This is the same problem I'm currently facing with WireGuard. No warning at all, no notification. One day I sign in to publish an update, and yikes, account suspended. Currently undergoing some sort of 60 days appeals process, but who knows. That's kind of crazy: what if there were some critical RCE in WireGuard, being exploited in the wild, and I needed to update users immediately? (That's just hypothetical; don't freak out!) In that case, Microsoft would have my hands entirely tied.
If anybody within Microsoft is able to do something, please contact me -- jason at zx2c4 dot com.
It has been clear for a while that certain providers and services need to be regulated as utilities - Microsoft, Google, Apple, Visa, Mastercard, and soon OpenAI and Anthropic.
It should be illegal for these companies, just like utilities, to deny service to any person or entity in good standing on their dues.
There is little hope of getting this through in the US, where most politicians of any stripe hate the public, and the ones that don't are nearly powerless. But it might be possible to do this in the EU.
Then, we non-EU folks need to apply for Estonian e-residency [1] which may get us EU regulatory coverage.
It would not surprise me if these actions are coming at the requests of governments. Strong encryption is one of the few things that challenges their monopoly on information; they have a very strong incentive to apply political pressure to the maintainers of these projects to, well, stop maintaining the projects. We've seen this in overt actions that the EU takes; in more covert actions that the U.S. government is suspected of taking; and in the news headlines about third-world dictatorships that just shut off the Internet. Tech companies are perhaps the most convenient leverage point for these actions.
More regulation won't help here, because the regulation-maker is itself the hostile party.
What would help is full control over the supply chain. Hardware that you own, free and open-source operating systems where no single person is the bottleneck to distribution, and free software that again has no single person who is a failure point and no way to control its distribution.
VLayer (my project) scans healthcare codebases for HIPAA compliance issues before they reach production. One thing I learned building it: developers rarely think about encryption until it's too late. Tools like VeraCrypt solve the "data at rest" problem, but the bigger issue in healthcare software is unencrypted data in logs and API responses — stuff that's much harder to audit manually.
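A minimal sketch of the kind of check described above -- scanning log output for unencrypted PHI-like values. The patterns and names here are hypothetical illustrations, not VLayer's actual rules; a real HIPAA scanner would cover far more identifiers (names, MRNs, dates of birth, addresses, and so on):

```python
import re

# Hypothetical PHI-like patterns for illustration only.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_log_line(line: str) -> list[str]:
    """Return the names of PHI-like patterns found in one log line."""
    return [name for name, rx in PHI_PATTERNS.items() if rx.search(line)]

def scan_log(lines):
    """Yield (line_number, findings) for lines that leak PHI-like data."""
    for i, line in enumerate(lines, start=1):
        findings = scan_log_line(line)
        if findings:
            yield i, findings
```

The point of automating even a crude pass like this is that log statements are written ad hoc all over a codebase, so no one reviewer ever sees them all -- exactly the "harder to audit manually" problem described above.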
>More regulation won't help here, because the regulation-maker is itself the hostile party.
It's easy to paint big government as the bad guy, but this is a case where, unfortunately, the populace seems to be in agreement with the big bad gov. While most US citizens (around 76%) support encryption, a majority (around 63%) also favor government "backdoor" access for national security reasons.
I guess either we believe in democracy or we don't. It could be said that if VeraCrypt isn't/can't be backdoored, perhaps the gov is simply implementing the will of the people :( via Microsoft.
What does democracy have to do with electronic encryption? Democracy existed before computers.
There are legitimate reasons for governments to intercept information, with the correct oversight -- enforced legally in a "checks and balances" manner. The fact that there is a breakdown of trust between government and people won't be solved with more encryption.
A core tenet of TrueCrypt + VeraCrypt (a developer guarantee) has always been no backdoors, even if requested by a government.
If, in a democratic society, the majority agrees that government should have backdoors (with the correct oversight), then it follows that VeraCrypt should be illegal, as its use is not in alignment with the will of the majority.
I personally don't agree with the majority here but can you fault the logic?
Most forms of democracy do not have a direct correspondence between "the will of the people" and the actual policies enacted. As another poster mentioned, tyranny of the majority is a thing, and robust democracies have evolved institutions to deal with it. Otherwise there's nothing stopping the majority from periodically voting the minority off the island, Survivor style, until only a single dictator remains.
In the U.S. in particular, there's strong respect for individual rights enshrined in the Constitution, and a key role of the judicial branch is ensuring that those rights are respected regardless of what the majority thinks. The majority cannot enslave the minority, for example, regardless of what the legislature votes. Nor can it deprive it of speech or free assembly, or guns, or a right to trial by jury.
Technofeudalism is what happens when grossly under-regulated anarcho-capitalism dominates, rather than a sustainable, more ordinary capitalism in which government regulation is the supreme, minimally biased arbiter that keeps things fairer and more sensible for the benefit of the many rather than the few.
"In addition to the information referred to in paragraph 1, the controller shall, at the time when personal data are obtained, provide the data subject with the following further information necessary to ensure fair and transparent processing: the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject."
C‑634/21 is also somewhat relevant for understanding how courts have applied ADM in the general context of credit reporting https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A... though it didn't specify what information actually needs to be provided for 13(2)(f).
I understand the sentiment, but... do you realize how much more expensive that would make all these services?
I don’t know the number. But personally, I think using these services only where their disappearance wouldn’t be catastrophic, and letting the price be low or free while they work, isn’t too bad a trade-off.
If this requirement were in place, they would be a bit more careful about terminating accounts, because the cost equation would push them toward caution. Maybe they would be more careful with their automation, or require more than one level of human review before cutting off access.
These companies are gatekeepers for their platform. It isn’t crazy to require them to act more responsibly.
These services are designed such that security sort of depends on reviewing the programs that are allowed to run. Microsoft, Google and Apple all do this. It adds expense, annoyance, limitations, and really very little security.
The contrasting approach, where one designs a platform that remains secure even if the owner is allowed to run whatever software they like, may be more complex but is overall much better. There aren’t many personal-use systems like this, but systems like AWS take this approach and generally do quite well with it.
> The contrasting approach, where one designs a platform that remains secure even if the owner is allowed to run whatever software they like
There's a lot one can gripe about Amazon as a company, but credit where credit is due -- their inversion of responsibility is game-changing.
You see this around the company, going back to their "accept returns without question" days of mail order.
Most critically, this inversion turns customer experience problems (it's the customer's problem) into Amazon problems.
Which turns fixing them into Amazon's responsibility.
Want return rates to go down because the blanket approval is costing the company too much money? Amazon should fix that problem.
Too often companies (coughGoogleMicrosoftMetacough) set up feedback loops where the company is insulated from customer pain... and then everyone is surprised when the company doesn't allocate resources to fix the underlying issue.
If false positive account bans were required to be remediated manually by the same team that owns automated banning, we'd likely see a different corporate response.
Even if they somehow became so expensive that the business no longer scaled to its current size, that is still not our problem -- if anything, it's a sign that they either need to improve their systems or simply cannot be as big as they are. Shit happens; scale down. I won't cry for them.
> I understand the sentiment, but.. do you realize how much more expensive that would make all these services?
It wouldn't. For example, before Gmail, email was often free or nearly free (bundled with your internet service), but in most cases, you could talk to a human if you had issues with the service.
What we couldn't do is turn these business models into planetary-scale behemoths that rake in hundreds of billions of dollars in revenue. In essence, you couldn't have Google or Facebook with good customer support. I'm not here to argue that Google or Facebook are a net negative, but the trade-offs here are different from what you describe.
Honestly, it's not our problem. Once a service becomes this vital, it shouldn't be possible to terminate access without any meaningful process. My Meta developer account is suspended and none of my appeals have been responded to. Who can I talk to? Nobody. It's wrong.
I don't think they would be that much more expensive, but they would certainly be less profitable, and perhaps less "innovative", as a big chunk of the profit would go into compliance.
They should probably be regulated as utilities and broken up into smaller companies, so that it's easier for people to migrate to alternatives when one company does something bad.
It's always weird to see the dichotomy between some people saying AI will never be profitable and is doomed to fail, and others saying it is such an essential public service that it amounts to a utility and should be subject to government regulation. Hopefully they are not the same people, but I suspect there is greater overlap than one would expect.
I'm not one of those people but want to point out that there isn't much of a contradiction there. I don't know if hospitals, universities, train tracks, roads, and libraries technically speaking count as utilities but they overall don't seem to be profitable and at the same time are extremely desirable for a society and an economy to have. AI could turn out to be of the same sort.
I've gotten business verification for Microsoft before. The kind you need in order to get certain oauth scopes for their O365 platform.
Do not discount complete, total, utter, profound fucking incompetence as the driving reason behind this.
Getting the business verification was an astounding shitshow. Even with a registered C corp and everything: massively unclear instructions, UI nestled in a partner site with tons of dead ends. And then, even after all the docs, it took another week because -- in an action that nobody could possibly have ever foreseen -- we had two different Microsoft accounts, due to a cofounder buying ONE LICENSE of O365 for Excel and doing domain verification because it suggested it.
Now this is even more alarming! WireGuard's creator has had their Microsoft account suspended...
<Tin foil hat on>
Microsoft doesn't want to allow software that would allow the user to shield themselves, either by totally encrypting a drive, or by encrypting their network traffic!
</Tin foil hat on>
> Microsoft doesn't want to allow software that would allow the user to shield themselves
I don't think Microsoft cares (about anything besides making mo' money), but there are plenty of (state) actors that can influence the decision-making at Microsoft when it comes to these issues.
Total enshittification with this pure aluminium shit. The hats don't block government UFO mind control waves and hold their shape nearly as well as the tin ones did. Fucking private equity ruins everything.
Wait, what?! I was sure that the agenda of Big Tinfoil was to generate FUD so that we buy more tinfoil for our hats. Are you implying their agenda goes even deeper?
Have you tried to buy tin foil lately? Big Aluminum has taken over, and just see how far you get soldering the grounding strap to an aluminum foil hat.
But it is NOT necessarily a factual statement that one of the main uses of electromagnetic radiation is for humans to send information over long distances; nor that I first learned about tinfoil hats from some random piece of information that was being broadcast by means of electromagnetic radiation. It's just a vibe.
>I don't think Microsoft cares (about anything else than making money), but there are plenty of (state) actors that can influence the decision-making at Microsoft when it comes to these issues.
Microsoft the corporation may only care about making money, but a lot of very high ranking folks within MS Security aren't just friendly to intelligence agencies, they take genuine pride in helping intelligence agencies. They're the kinds of people who saw nothing wrong or objectionable with PRISM whatsoever, they were just mad they got caught, and that the end user (who they believe had no right to even know about it) found out anyway. The kind of people who openly defend the legitimacy of the FISA court.
These aren't baseless accusations; this comes from first-hand experience interacting with and talking to several of them. Charlie Bell literally kept a CIA mug on a shelf behind him, prominently visible during Teams calls, as if to brag.
Remember - Microsoft was the very first company on the NSA's own internal slide deck depicting a timeline of PRISM collection capabilities by platform, started all the way back in 2007. All companies on that slide may have been compelled to assist with national security letters. Some were just more eager than others to betray the privacy and trust of their own customers and end-users.
I was always convinced that Skype was bought by Microsoft so that the CIA/US intelligence agencies could have listening capabilities.
The first thing Microsoft did after the Skype purchase was to make it easier to tap into calls, by removing p2p calling and routing calls through centralized servers.
That's my experience with most computer security folks as well, and tech companies who sell security products. Cloak-and-dagger stuff running 24x7 in their heads.
There are quite a few extremely talented security folks who are more or less the polar opposite, who view people like Edward Snowden and Julian Assange as heroes, the NSA as guilty of treason, as James Clapper as guilty of perjury, even inside of corporations like Microsoft.
The catch is, views like those must be kept to a fairly modest level by the people who hold them. Discussing them with ideologically aligned colleagues may be fine, but for example, when someone makes statements or asks questions with such pro-privacy framing on stage directly to security leadership at internal company conferences, that is a quick way to a severance package not only for the person on stage, but also for dozens of folks in the audience who clapped a little too enthusiastically at the onstage remarks.
>I don't think Microsoft cares (about anything besides making mo' money)
If Microsoft amounts to a sentient entity (i.e. is able to care about things), we have a bigger problem.
If we put the wall of metaphor between us and that interpretation, it still remains likely that "users shielding themselves" is of primary concern to Microsoft's bottom line.
If you use an automated process to disable accounts, but then state that there is no appeals process available, you are not to be trusted to be acting in good faith. Bad actors should be called out, not given the benefit of the doubt.
This phenomenon is so Orwellian, and so little noticed, that it should be both an SNL skit and a John Oliver episode. It's illiberal, neoliberal, corporate bullshit that causes real harm to individuals. These companies need to be treated as utilities, and the "companies can do whatever they want" arguments must be debunked and defeated, because of the pervasive power they hold and the immense harm they can cause to individuals, without remedy, when they rug-pull access without clear cause.
It also reminds me of the case of the entire family who lost all of their payment-linked individual accounts including business data and an academic dissertation because the son allegedly behaved inappropriately with a bot. Collective punishment on top of technofeudal instant banishment.
Where are the people that tried to sell us software signatures as a security benefit? The reality is that they are a very specific security problem -- in theory and in practice.
When a company makes it impossible to correct their stupidity, it's a malicious act. The behavior speaks loud and clear: "We don't care what damage we do to developers or users. And we don't want to hear about it."
It was probably true at some point; then malicious people learned how to fake stupidity, and they outnumber actually stupid people -- and they learned how to recruit stupid people to their causes.
No. Embrace, Extend, Extinguish was replaced by the AAA strategy: Acquire, Assimilate, Abandon. They were trying to be more Google-like with that "Abandon" step I think.
They've since moved on to the SSS strategy: Ship, Slip, Slop.
Maybe time for a custom license that would require M$ to sign up for special T&Cs if they want to use this software?
Who cares if it's OSI-approved or not, a line saying "M$, Google, and the like need written permission for every use case" would help to make those leeches honest. Just learn from the JSLint example.
Valkey is better because all of the new development work happens on Valkey, not because of the license. If the actual developer changed the license, that would be a different situation.
That actually is not an analogy at all, and it makes sense. When a low-paid Uber Eats delivery person carelessly throws the box and brings the customer a damaged dish, that's a real issue.
In digital services there's no such thing. There's only a damned corporation employing idiots who don't care about community.
Having multiple accounts wouldn't help, as Microsoft could easily suspend all the accounts of everyone associated with the project if any account looks suspicious. The single point of failure is Microsoft.
Any account can sign the same piece of software. Of course, Microsoft could detect that it's signing software related to a banned signer and ban the new account too. So VeraCrypt (and WireGuard) is stuck.
It's outrageous. MS is simply enforcing some Government crackdown on encryption software that would interfere with backdoors.
60 days, long enough for the US to exploit the vulnerabilities discovered by Claude Mythos, short enough to plausibly be bureaucratic corporate awfulness by Microsoft when all is said and done. Basically freezing you and other security software out of protecting the bad guys they particularly want to get at until after the bad guys get got, then everything goes back to normal and Microsoft says "oops, here, we fixed your access."
The other day I tried to create a Github account and was repeatedly told I am fraudulent. Nothing else. Try again later, it says.
This is the same thing that's happened every time I've tried to have a Microsoft account. I don't think Microsoft wants to have customers who aren't rich.
Maybe some bot signed up using your email and then did bot things on it. I've had that happen a lot over the years. My Microsoft account is still stuck in German because that's the language the bot used when creating the account (to spam X-Box apparently).
I got a 20y old hotmail/live account deleted by Microsoft because a bot tried to reset my password too many times. Considering the magnitude of the targeted attack, MS found the safest way to keep me secure was to wipe my account. That way the attacker could not get into my account.
I had something similar with a 6-letter apple account that has never been compromised but I guess got put on some kind of list, because I had to go through account recovery almost every time I logged in, which wasn't a big deal until I got an iphone. Apple support was completely useless. Random old buried forum post in a stall marked "beware the leopard" mentioned the behavior and suggested changing the account name.
Nothing in the Apple site or phone stuff would even clue the user in to what was happening, much less how to resolve it.
I saw a tweet saying that there's a requirement for verification.
> Effective October 16, 2025, Microsoft will initiate mandatory account verification for all partners in the Windows Hardware Program who have not completed account verification since April 2024.
> Partners who fail to complete Account Verification by the deadline, or who do not meet the requirements, will have their status set to Rejected and will be suspended from the program.
They always just tell me to ask copilot, then they open a case using copilot, and then they tell me to ask copilot again. I said I wanted to prove that the code didn't contain malicious code, and they still told me to ask copilot...
This account has been suspended because the code you submitted contains malware or potential vulnerabilities. If you believe your account was suspended in error and can demonstrate that the code you submitted does not contain malware or vulnerabilities, please follow the below steps, and contact us.
1. Go here: http://aka.ms/hardwaresupport
2. Click Contact Us
3. Make sure you are signed in with a user associated with the HDC account in Partner Center
4. Select Ask Copilot to receive email support.
I tried to set up a partner account for driver signing last year (as a business entity) and it already seemed basically impossible. I think they're getting ready to just simply not allow it at all.
This is stupid. If Microsoft wants people to stop writing kernel drivers, that's potentially doable (we just need sufficient user-mode driver equivalents...), but not doing that while also shortening the list of who can sign kernel drivers down to some elite group of grandfathered companies and individuals is the worst possible outcome.
But at this point I almost wish they didn't fix it, just to drive home the point harder to users how little they really own their computer and OS anymore.
Not exactly the same situation, but RustDesk has recently been removed from the official WinGet community repository because the repo's automated scans have been blocking updates since v1.4.2 in September 2025.
tl;dr: ESET Antivirus flags RustDesk as a "Potentially Unsafe Application" because it is a remote administration tool, despite not flagging similar commercial products in the same way, and the WinGet Community repo policy is to block anything flagged as such. Since they were unable to update the repo, the RustDesk team requested that the older versions be removed, to prevent users from unknowingly installing old versions that could become a security issue in the future. Apparently this has been an issue for a lot of applications, especially in the VPN and remote control categories.
There is a discussion about how best to handle these sorts of situations where legitimate and desirable applications get flagged as "potentially unsafe" or "potentially unwanted" but so far it's just been a discussion with no actual changes proposed yet.
With these big players, who are regularly found supporting people with evil intentions: don't attribute to incompetence what could be ascribed to malice. Nay, you must trust the gods of the clouds to keep your secrets for you, all for the low low price of $x.99 a month per seat, and you may only cancel your service with an arcane dance and the sacrifice of your firstborn!
Thank you for the extra visibility on this issue. I'm in the exact same boat: account suspended, waiting for the 60 days appeal process. Hopefully it will be resolved swiftly!
True, but even if it gets resolved for them, it should be a huge warning sign to everybody. Projects like these might get reinstated, but only because they are big enough for it to matter. Any individual, or any small or 'undesirable' project, would not get the same resolution.
Surprised to see you here. Thanks for all your hard work.
Windows users are in a tough spot, but with the dawn of Copilot, nobody should be surprised. Frankly, those who remain with Windows after this latest betrayal have chosen their fate.
are you making the argument that businesses worldwide are somehow known to make well-thought-out, rational, wise decisions that are in the best interest of the business and the efficiency of running it?
because most managers I know in my professional life go with the vendor that buys them dinner or slips them tickets for box seats.
I think it’s intentional: those encryption (at rest/in transit) applications are outside of MS control, and you can assume outside of potential backdoors by three-letter agencies. BitLocker vs VeraCrypt? Of course BitLocker is favorable from their perspective.
I wouldn’t be surprised if NSA already had a list of these applications and the strategies on how to cripple them or worse, compromise them.
That’s not how any of this works. There are separate teams within (each division of) Microsoft that could easily pull the plug on your account (or if not the entire account then your account’s access to the specific service or family of services) for any of a myriad purported reasons or alleged ToS violations.
No one is calling an executive meeting to discuss banning an OSS dev’s account.
I have a hard time believing this to be true, when for a while now it's been some automated system that goes completely unchecked and unmonitored. It's not until someone who is wrongfully affected complains on Xitter that anyone notices.
"Currently undergoing some sort of 60 days appeals process, but who knows."
.. and the op said:
"I have tried to contact Microsoft through various channels but I have only received automated replies and bots. I was unable to reach a human."
... which is a roundabout way of saying that you did not spend lawyer hours, and did not contact them through channels that they cannot ignore: registered, physical mail, from a lawyer.
I'm sorry for these difficulties, truly, but don't tell me you can't reach a human when you most definitely can reach a human. From my own experience with an organization at least as calloused and indifferent as MS[1], as soon as I sent a real, legal communication I had real live humans lining up to talk to me.
Microsoft hasn't managed to burn down entire towns (But Copilot is probably working on it), so I suppose we do have at least some kind of gauge of callousness to work off of thanks to PG&E. Which was also the company behind that whole slightly famous Erin Brockovich thing, amongst so very many others.