My data protection and privacy overview 2025, for everyone
Please share!
As a PDF file:
https://cryptpad.digitalcourage.de/file/#/2/file/kRgZ+fsPATHElnUKYE8ziTgT/
#DSGVO #TDDDG ( #unplugtrump )
#Datenschutz #Privatsphäre #sicherheit #Verschlüsselung
#encryption #WEtell #SoloKey #NitroKey #Email #Cybersecurity #Pixelfed #Massenüberwachung #Leta
#Google #Metadaten #WhatsApp #Threema #Cryptpad #Signal
#Hateaid #Cyberstalking #Messenger #Browser #Youtube #NewPipe #Chatkontrolle #nichtszuverbergen #ÜberwachungsKapitalismus #Microsoft #Apple #Windows10 #Linux #Matrix #Mastodon #Friendica #Fediverse #Mastodir #Loops #2FA #Ransomware #Foss #VeraCrypt #HateAid #Coreboot #Volksverpetzer #Netzpolitik #OpenAndroidInstaller
#Digitalisierung #FragdenStaat #Shiftphone #OpenSource #GrapheneOS #CCC #Mail #Mullvad #PGP #GnuPG #DNS #Gaming #linuxgaming #Lutris #Protondb #eOS #Enshittification
#Bloatware #TPM #Murena #LiberaPay #GnuTaler #Taler #PreppingforFuture
#FediLZ #BlueLZ #InstaLZ #ThreatModel
#FLOSS #UEFI #Medienkompetenz
Lastly, there's the training data. I work for #AWS (so these are strictly my personal opinions). We are opinionated about the platform. We think that there are things you should do and things you shouldn't. If you have deep knowledge of anything (Microsoft, Google, NodeJS, SAP, whatever) you will have informed opinions.
The threat models I have seen that use general-purpose models like Claude Sonnet include advice I think is stupid, precisely because I am opinionated about the platform. There's training data about AWS in the model that was authored by not-AWS, and there's training data that was authored by AWS. The former massively outweighs the latter in a general-purpose, trained-on-the-Internet model.
So internal users (who are expected to do things the AWS way) are getting threats that (a) don't match our way of working, and (b) they can't mitigate anyway. For example, I saw an AI-generated threat of brute-forcing a Cognito token. While the possibility of that happening (much like buying a winning lottery ticket) is non-zero, it is not a threat that a software developer can mitigate. There's nothing you can do in your application stack to prevent, detect, or respond to it. You're accepting that risk, like it or not, and I think we're wasting brain cells and disk sectors thinking about it and writing it down.
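To put numbers on the lottery comparison (my own back-of-envelope, nothing from any AWS doc; the token entropy and guess rate are illustrative assumptions):

```python
# Back-of-envelope: expected time to brute-force a token with 128 bits
# of entropy at a generous 1 million guesses per second. Both numbers
# are illustrative assumptions, not measured values.
GUESSES_PER_SECOND = 1_000_000
ENTROPY_BITS = 128  # conservative lower bound for a signed token

keyspace = 2 ** ENTROPY_BITS
expected_seconds = (keyspace / 2) / GUESSES_PER_SECOND  # average hit at half the keyspace
expected_years = expected_seconds / (60 * 60 * 24 * 365)

print(f"expected time to guess: {expected_years:.2e} years")
# ~5.4e+24 years, i.e. roughly 400 trillion times the age of the universe.
```

No mitigation you write down changes that math; it's already absurd.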
The other one I hate is when it tells you to encrypt your data at rest in S3. Try not to: S3 already encrypts new objects at rest by default, so there's no action for you to take. The things you control are which key does it and who can use that key.
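For contrast, here's roughly what the actionable version looks like: a boto3 sketch (bucket name and key ARN are placeholders I made up) that sets which customer-managed KMS key does the encrypting. Who can use that key is then a KMS key-policy question, not an S3 one.

```python
import boto3

s3 = boto3.client("s3")

# S3 already encrypts new objects at rest by default. The decision you
# actually own is WHICH key: here, a customer-managed KMS key instead
# of the S3-managed default. Bucket and key ARN are placeholders.
s3.put_bucket_encryption(
    Bucket="example-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
            },
            "BucketKeyEnabled": True,  # fewer KMS calls, lower cost
        }]
    },
)
```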
So if you have an area of expertise, the majority of the training data in any consumer model is worse than your knowledge. It is going to generate threats and risks that will irritate you.
4/fin
Threat models evolve over time, the same as your software does. Nobody is building a save/load feature into their AI-powered threat modeling tool. Getting deterministic output from consumer-grade LLMs is not a given, so even if you DO create save/reload capability, it's imperfect.
All the tools I've seen start every session from a blank sheet of paper. So if you're revisiting an app that you threat modeled before, because you want to update your model, you're going to start from scratch.
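The mundane version of save/reload is just persisting the reviewed threat list and replaying it as context in the next session. A hypothetical sketch (no real tool's API; the field names are mine):

```python
import json
from pathlib import Path

STORE = Path("threatmodel.json")

def save_model(threats: list[dict]) -> None:
    """Persist the human-reviewed threat list so the next session
    doesn't start from a blank sheet of paper."""
    STORE.write_text(json.dumps(threats, indent=2))

def load_as_context() -> str:
    """Turn the saved model into a prompt preamble for the next session."""
    if not STORE.exists():
        return "No prior threat model; start from scratch."
    threats = json.loads(STORE.read_text())
    lines = [f"- [{t['status']}] {t['title']}: {t['mitigation']}" for t in threats]
    return "Previously identified threats (update these, don't regenerate):\n" + "\n".join(lines)

save_model([{"title": "EventBridge event injection",
             "status": "mitigated",
             "mitigation": "scoped event bus resource policy"}])
print(load_as_context())
```

Even then, a non-deterministic model can still contradict its own saved output, which is the imperfection I mean.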
3/n
Related to this, nobody seems to account for the fact that LLMs bullshit sometimes. Pin someone down and ask, "the users of your AI-powered threat modeller: do they know how to do a threat model without AI?" Many people will say "yes," because to say "no" is to admit that people will be blindly following LLM output that might be total bullshit.
The goal, however, of many of these systems is to make threat modeling more accessible to people who don't know how to do it. To do that, though, you'd have to be more skeptical about your user, and spend some time educating them. Otherwise, they leave the process no smarter than they began.
Honestly, I think a lot of people think the threat model is going to be done entirely by the AI and they want to build a system where the human just consumes and uses it.
2/n
I have seen a lot of efforts to use an #LLM to create a #ThreatModel. I have some insights.
Attempts at #AI #ThreatModeling tend to do 3 things wrong:
1/n
But seriously, would #Crowdstrike's lawyers come after me if I published findings from a public threat model, where I might write up airlines and hotels for lacking vendor diversity and for failing open on connected systems? (E.g., you could book and pay for hotels that were offline, but Booking.com and some others didn't give a fuck and sent people to hotels that weren't expecting them for days.)
Another finding: not having a plan or procedure. Etc.
The #encryption topic in #InstantMessaging is popular again lately. As usual, there's a lot of misunderstanding, and recommendations get handed out with little discussion of a #ThreatModel.
If the private key is backed up from your phone to Apple or Google, then your messages may as well not be encrypted. I've again seen this indirectly with contacts changing phones: their keys are the same as on their old device, presumably due to automatic backups.
Doesn't matter if it's #WhatsApp, #Signal or #XMPP
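This is what fingerprint (safety number) verification exists to catch. A toy sketch of the idea, not any messenger's real format: a genuinely new device means a new key pair and therefore a new fingerprint, while a restored backup reproduces the old one.

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    """Toy fingerprint: truncated hash of the public key, grouped for
    easy visual comparison. Real messengers format this differently."""
    digest = hashlib.sha256(public_key).hexdigest()[:20]
    return " ".join(digest[i:i + 5] for i in range(0, 20, 5))

old_device_key = b"\x04" + b"\xaa" * 32  # placeholder key material
new_device_key = b"\x04" + b"\xbb" * 32  # what a fresh key pair looks like

# A new phone with an UNCHANGED fingerprint means the private key
# travelled via backup, and your end-to-end trust now includes the
# backup provider.
print(fingerprint(old_device_key))
print(fingerprint(new_device_key))
```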
Some of my colleagues at #AWS have created an open-source, serverless, #AI-assisted #threatmodel solution. You upload architecture diagrams to it, and it uses Claude Sonnet via Amazon Bedrock to analyze them.
I'm not too impressed with the threats it comes up with, but I am very impressed with the amount of typing it saves. Given nothing more than a picture and about two minutes of computation, it spits out a very good list of what is depicted in the diagram and the flows between the parts. To the extent that the diagram is accurate and well labeled, the write-up is faithful to it.
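I haven't dug through the implementation, so this is only a guess at the shape of the diagram-to-inventory step: a single Bedrock Converse call with the image attached (the model ID and prompt are my assumptions, not the app's actual code).

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Read the architecture diagram the user uploaded.
with open("architecture.png", "rb") as f:
    diagram = f.read()

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model
    messages=[{
        "role": "user",
        "content": [
            {"image": {"format": "png", "source": {"bytes": diagram}}},
            {"text": "List every asset, trust boundary, and data flow "
                     "shown in this architecture diagram."},
        ],
    }],
)

print(response["output"]["message"]["content"][0]["text"])
```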
I deployed this "Threat Designer" app, took the architecture image from this blog post, and dropped that picture into it. The image analysis produced the list of things you see attached (in part).
This is a specialized, context-aware kind of OCR. I was impressed by the boundaries, flows, and assets it pulled from a graphic; it could save a lot of typing time. I was not impressed with the threats it identified. That said, it did surface a handful of things I hadn't thought of before, like EventBridge event injection, but the majority of the threats are low value.
I suspect this app is not cheap to run. So caveat deployor.
#cloud #cloudsecurity #appsec #threatmodeling
> When doing threat modeling from here on out, it is now unfortunately important to consider the question "Am I a moron?"
derisking morons is something i do for a living if anyone is hiring for that. i also turn software engineers into security and privacy resources. triple multiplier right here
#infosec #threatModel #morons
https://apple.news/Ay6s_pSlzRrGicmqMVSr65w
i have debilitating #imposterSyndrome despite 25y of experience in #security, but i know for a fact that i am unusually good at facilitating a #threatmodel. you have to get people to trust you enough to tell you things they don't feel great about, or things they would do differently after we meet. but that's the thing: together we create remediation plans that let people do their best work, they weave security and privacy into their work, and when you meet again you can see how much better things are. it's parade time
it's lucky for some team out there that i find few things are as satisfying as transmogrifying a team of 3 into a team of 9. or 90 into 270.
even i know that's good math! they start spotting problems before they get in front of me for their second and third #threatmodel.
i have experience in managed services, vuln management, IR, forensics, cloud architectures, SaaS vendors, HPC, DOCSIS/fiber/firewalls/IDS/IPS/MFA/U2F/PKI
Fediverse. I need your magic. Please tell me your most amusing and wtf #ThreatModel fails.
Redundant systems are not waste or inefficiency. They are protection from threats known and unknown. We are now seeing this on a national and global scale.
#threatModel #infoSec
#zeroTrust?
#COVID #News #Pandemic #LongCOVID