dougthonus wrote:
micromonkey wrote: The problem with the current lab leak theory is that it's all based on rumors, it's poorly documented, and the people pushing it have agendas. I was open to investigating it but find it totally lacking.
I've not really researched the lab leak theory, because in the end I don't really care if it was a lab leak. People are doing crazy research in labs, and some of it is dangerous; we can either stop doing that research altogether or accept that these types of things are going to happen.
To put it a different way, AI could have catastrophically bad outcomes for the human race, but we're still pushing along, investigating it as fast as possible. There have been tons of movies, like Terminator or The Matrix, based on the idea of AI surpassing people and taking over. This is an extremely likely eventual outcome of advanced AI research, and the moment it happens it will be over for the human race if the advanced AI decides it should be (and who knows what it will decide once it's an order of magnitude smarter than us). It won't be able to be stopped, yet we're blissfully going down that path anyway. It probably won't happen until after I'm dead, but not too far after (~50-100 years seems like a lock).
In the end, I tend to believe the lab leak theory. The epicenter being right next door to a lab doing coronavirus research seems, on the surface alone, like way too big a coincidence to ignore, even without any other single piece of evidence. The fact that there was absolutely no transparency about what happened is a typical MO for China, but it basically means you can't reliably believe any data from there. If I had to gauge the likelihood of a lab leak based on only that one reliable data point, knowing all the other data points are likely unreliable, I'd say more likely than not. Again, a very superficial analysis, but from an Occam's razor standpoint it simply seems to make sense, and I just don't think you can trust any other data when China had so much time to clean it all up (and was actively doing so).
That said, again, we're researching dangerous crap all over the world, and we're going to have big problems all over the world because of it. The human race is likely going to kill itself off entirely or create a post-apocalyptic outcome, because we keep coming up with more and more ways to kill everyone. When you think about it, before 1950 there probably was no single way we could even do that; we just didn't have the technology. Now we could trivially do it through a nuclear world war, AI is a likely future threat, engineered bio/supervirus weapons are another, degrading the environment until it's unlivable is a third, and we'll probably come up with more.
It's really a race as to whether we do something dumb enough to wipe ourselves out or get off the planet and colonize first.
I will say I agree with your overall point about bioweapons/superweapons in general.
No doubt we already live in an era where a human-engineered plague is no longer a work of fiction. It is definitely possible.
A state-funded bioterrorist group could probably rework MERS to make it far more contagious; they could simply "improve" upon existing proposals, or rework some other nasty virus/plague from our past. Less than a decade from now it will probably be possible for terror cells. We'd never know, and we'd have little defense. If we thought we could quarantine our way out of mass deaths, we've seen how far we are from that. If we thought we had any real contingency plans, we should know by now that we've got nada.
I think we face risks far, far sooner than general AI. Purpose-built AI (already here), combined with robots, drones, and existing weapons, could already wreak havoc. All you need is reckless state actors.
You will get no argument from me that our current defense is woefully out of date and focused on the wrong areas. Sadly, you would think a shared national disaster would bring the sides closer together, but it has done just the opposite.