
Unsupervised Learning: No. 185

From Unsupervised Learning



Length:
22 minutes
Released:
Jul 8, 2019
Format:
Podcast episode

Description

The Telegraph has found strong links between Huawei employees and Chinese intelligence agencies. Huawei's counter was that such ties are extremely common among telecom companies and not a big deal. The counter to that counter was, basically, "Well, then why did you try to hide it?" /gg

The NPM security team caught a malicious package designed to steal cryptocurrency. Many of these packages work by uploading something useful, waiting until lots of people depend on it, and then updating it with a malicious payload. My buddy Andre Eleuterio did the IR on the situation at NPM and said they're constantly improving their ability to detect these kinds of attacks. Luckily NPM's security team had the talent and tooling to catch this one, but think of how many similar companies aren't so equipped. Any team that's part of a supply chain should be thinking about this type of attack very seriously.

Federal agents are mining state DMV photos to feed their facial recognition systems, and they're doing it without proper authorization or consent. To me this has always been inevitable because, as Benedict Evans pointed out, it's a natural extension of what humans already do. We already have wanted posters. We already have known-suspects lists. And it's already acceptable for any citizen or any cop to spot a person on that list and report them; in fact, it's encouraged. So the only thing happening here is that the process is becoming far more aware (through more sensors), and therefore more effective. Of course, broken algorithms that identify the wrong people, or that automatically single out groups of people without actual matches, need to be rooted out. But we can't expect society to forgo superior machine alternatives to existing human processes, such as identifying suspects in public. That just isn't realistic.

Our role as security people should be making sure these systems are as accurate as possible, with as little bias as possible, built by the best possible people. In other words, we should spend our cycles improving reality, not trying to stop it from happening.

Support the show: https://danielmiessler.com/support/
See omnystudio.com/listener for privacy information.
Thinking about the intersection of security, technology, and society—and what might be coming next. Every Monday morning you get a curated 15-30 minute summary of the week's most important stories and why they matter. Plus regular essays and interviews that explore a single topic.