OT: Orwellian future on steroids

User Forum Topic
Submitted by ucodegen on November 16, 2017 - 2:37pm

What was presented by the Terminator series of movies was not completely accurate: the devices that come after humans will not look like us. They may be more like something the size of a hummingbird, or smaller.

https://www.youtube.com/watch?v=9CO6M2Hs...

Food for thought:

  • Should we run a national defense project to learn how to build this technology, but with the emphasis on how to defend against it? Part of the reasoning is that some rogue nation will decide to develop the tech covertly in spite of any ban. Keep in mind the risk posed by classified data leaks.
  • Should we outright ban them to try to prevent them, given that a lot of what rogue nations get, they get through 'security leaks' and 'security compromise'?
  • Any other options (be realistic)?

    This scenario is not as far-fetched as you may think. In many cases it is just further miniaturization of what we already have. A little more on my background, to emphasize that I am not a tinfoil-hat nut.

    Started writing code in HS, freshman year. Was exposed to one of the first portable computers, an IBM 5100 - this would be the mid-'70s. Took summer high-school acceleration courses at Caltech. By the end of HS, I was programming in APL, BASIC, and Fortran. Near-perfect SAT score, two perfect AP test scores - math and physics. Started at UCSD in Physics-Engineering, switched to EECE in my third year. Some work at SIO - various. From then on, almost all DoD - including tracking, guidance, modeling, and simulation.

    I'm not trying to toot my own horn, just want to show that I am coming from the aspect of someone who has the knowledge to accomplish much of the 'slaughterbot' tech and I recognize that there are others as capable and knowledgeable. To me, the concept is very chilling and has bothered me for quite a while. I see both the allure and the extreme danger.

    I know that on a ban, most of the truly democratic nations will tend to abide by the ban - but what about North Korea, China and Russia?

    Thoughts?
    There is also a link for those who want to take action towards a ban: http://autonomousweapons.org/

  • Submitted by moneymaker on November 16, 2017 - 3:00pm.

    Right now looks like fake news, but I can see it in the near future.

    Submitted by ucodegen on November 16, 2017 - 3:14pm.

    moneymaker wrote:
    Right now looks like fake news, but I can see it in the near future.

    The video 'demonstration' is not real, and neither is the contrived scenario - it is a warning of what may come. The timeline for having such devices is now estimated in years, not decades. Remember Moore's Law. The video was designed to look like a real promotion - watch to the very end.

    Submitted by spdrun on November 16, 2017 - 3:28pm.

    Hunter-seekers from Dune ... hmmm - maybe we'd be better off if North Korea did the EMP thing :)

    Anyway, we should be working on EMP weaponry to fry the brains and sensors of those things.

    Submitted by The-Shoveler on November 16, 2017 - 5:31pm.

    Soo you think NK will not have the ability to develop this on their own?

    (or buy talent?)

    Having the ability to take out "one" Individual in a crowd (reliably) may take a long time but to take out anything that looks human in a given city probably not so long.

    Submitted by ucodegen on November 16, 2017 - 6:57pm.

    The-Shoveler wrote:
    Soo you think NK will not have the ability to develop this on their own?

    (or buy talent?)

    Having the ability to take out "one" Individual in a crowd (reliably) may take a long time but to take out anything that looks human in a given city probably not so long.


    I don't think NK has the ability to develop significant new tech on their own, but they have shown the ability to buy or hire talent - possibly from China, maybe Russia, through their respective black markets. I alluded to that in my comments. Picking out one individual in a crowd is already here: British police cameras can do it, and Apple has face recognition for their phones (one's face unlocks the phone). Picking out human forms is already here too - Google's driving algorithms can detect and differentiate between cars, buses, people riding bikes, bikes at the roadside, animals, and people walking on sidewalks.

    Watch this (emphasis on time index 0:34): yellow boxes are people walking, the red box is a person on a bike, purple boxes are cars, and the series of red boxes on the ground represents a 'control zone'. It is also identifying and 'reading' stoplights, as well as stop signs.
    https://www.youtube.com/watch?v=MqUbdd7ae54

    So on both counts we are already there. My argument goes along these lines: assume that something with that capability currently takes up the space of a Predator drone. That seems a reasonable upper bound, considering that Google's unit runs in a box smaller than a suitcase in a car, Apple's face ID fits in a phone, and the Predator already has propulsion, weapons, comms, etc. Now apply Moore's law of increasing density to the required size. A Predator is about 55' wide by 27' long - roughly 1,485 square feet of horizontal area, equivalent to a square about 38.5' on a side. If Moore's law halves the required area every two years, the side shrinks by a factor of √2 each step: roughly 27' on a side at two years, 19' at four, 13.6' at six, 9.6' at eight, and 6.8' at ten years.
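This shrink argument can be sketched in a few lines. It is a toy model under two assumptions of mine: the required area halves every two years (a Moore's-law-style density assumption), and the starting point is a Predator-sized square of about 38.5 ft on a side.

```python
import math

# Toy model: a fixed-capability package whose required area halves
# every two years, starting from a ~38.5 ft square (Predator-sized).
def side_after(years, side0_ft=38.5, halving_period_yr=2.0):
    """Side of the equal-area square after `years` of area halving."""
    area0 = side0_ft ** 2
    area = area0 / 2 ** (years / halving_period_yr)
    return math.sqrt(area)

for y in range(0, 12, 2):
    print(f"year {y:2d}: {side_after(y):5.1f} ft on a side")
```

Note that halving the *side* every two years (rather than the area) would correspond to quartering the area, which is faster than Moore's law assumes.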

    Now there is one big problem with what I mentioned above: the Predator's guidance, surveillance, and weapons-control package is only about 1.5' by 1.5' by 3', not 27' by 55'. All the rest is the plane, including fuel for long-range operations. That means the shrinking clock effectively starts from a much smaller package than the airframe numbers suggest - we may already be most of the way there. As for smaller drones that spy, this was 2011 (and it is currently 2017):

    http://content.time.com/time/video/playe...

    actual applied device:
    https://www.youtube.com/watch?v=4o7mRg74qcY

    And this was done 2013.

    Submitted by phaster on November 17, 2017 - 5:34pm.

    ucodegen wrote:

    https://www.youtube.com/watch?v=9CO6M2Hs...

    ...This scenario is not as far fetched as you may think. In many cases it is just the further miniaturization of what we already have. Maybe a little more on my background to emphasize that I am not a tinfoil hat nut...

    interesting video presentation,... was waiting for the punch line “making the world a better place!”

    https://vimeo.com/98720197

    perhaps adding that bit of gallows humor might have been just a bit over the top,... anyway given Moore's law, asymmetric “autonomous-information” warfare is inevitable

    Quote:

    ON HYPERWAR

    “...Technologies such as computer vision aided by machine-learning algorithms, artificial intelligence (AI)-powered autonomous decision making, advanced sensors, miniaturized high-powered computing capacity deployed at the ‘edge,’ high-speed networks, offensive and defensive cyber capabilities, and a host of AI-enabled techniques such as autonomous swarming and cognitive analysis of sensor data will be at the heart of this revolution … [which will minimize] human decision making in the vast majority of processes traditionally required to wage war.

    ...What makes this new form of warfare unique is the unparalleled speed enabled by automating decision making and the concurrency of action that become possible by leveraging artificial intelligence and machine cognition. … In military terms, hyperwar may be redefined as a type of conflict where human decision making is almost entirely absent from the observe-orient-decide-act (OODA) loop. As a consequence, the time associated with an OODA cycle will be reduced to near-instantaneous responses. The implications of these developments are many and game changing.”

    https://www.usni.org/magazines/proceedin...

    what's the old proverb, may you live in interesting times

    the pragmatic goal for anyone w/ or w/out a tinfoil hat being,... “may you live thru interesting times”

    Submitted by CA renter on November 18, 2017 - 3:26am.

    Personally, I'm concerned about ALL state actors as well as independent groups and individuals who could use this type of technology in a nefarious way. I've been following these issues for a number of years as a non-tech, but politically active, observer. IMO, we should outright ban this type of technology and spend our resources on defending against it.

    Many of our greatest "tech geniuses" have been warning against AI for a long time. While there are some benefits to AI, we need to balance those against the risks. IMHO, the risks are too great. There are plenty of people in the world who would like to use this technology in order to gain power and control over the world's resources and human populations.

    AI isn't the only problem, though. Drone, surveillance, and weaponized technology (and miniaturization) is a problem in itself. People can be every bit as evil as a machine, so I think we need to work toward the defense against, and the elimination of, most intrusive or destructive technology because there's no way to ensure that it will be used for benevolent purposes.

    What do you think, ucodegen? You're probably better informed than most people regarding this issue. I've always appreciated your input on a variety of topics. It's long been obvious that you know very well what you're talking about when you post; you're not tooting your own horn. Thank you for your contributions to this site.

    Submitted by FlyerInHi on November 19, 2017 - 2:11am.

    Did you guys hear about the fight on the plane? Woman used her sleeping husband’s finger to open his phone and found he was cheating.

    If you use biometrics, the police can forcibly use your body to open devices. But they can’t force you to reveal passwords you may not recall.

    Submitted by ucodegen on November 19, 2017 - 3:01pm.

    phaster wrote:

    interesting video presentation,... was waiting for the punch line “making the world a better place!”

    https://vimeo.com/98720197

    perhaps adding that bit of gallows humor might have been just a bit over the top,... anyway given Moore's law, asymmetric “autonomous-information” warfare is inevitable


    Maybe that was the mission statement for the pseudo-company in the video? I don't know that it would be over the top - maybe gallows humor is appropriate.
    phaster wrote:

    whats the old proverb, may you live in interesting times

    the pragmatic goal for anyone w/ or w/out a tinfoil hat being,... “may you live thru interesting times”


    Couldn't agree more. This is one of the things that I think banning won't prevent - the knowledge is already out there. It might be better to think about how to defend against it. I like the link you provided; I did a read of your quote and bookmarked the link to come back for a more thorough read. Here are two more links, to the effect that putting a ban in place really won't stop development.

    https://spectrum.ieee.org/automaton/robo...

    https://spectrum.ieee.org/automaton/robo...

    Submitted by ucodegen on November 19, 2017 - 3:14pm.

    CA renter wrote:
    Personally, I'm concerned about ALL state actors as well as independent groups and individuals who could use this type of technology in a nefarious way. I've been following these issues for a number of years as a non-tech, but politically active, observer. IMO, we should outright ban this type of technology and spend our resources on defending against it.
    How are we going to prevent it? It sounds like a good idea until you look at what is needed for compliance. You don't need a large infrastructure to develop or build it. We can't even control the manufacture and importation of illicit drugs into the United States. What makes you think an outright ban would prevent development? I do think we need to concentrate on how to defend against them.

    CA renter wrote:

    Many of our greatest "tech geniuses" have been warning against AI for a long time. While there are some benefits to AI, we need to balance those against the risks. IMHO, the risks are too great. There are plenty of people in the world who would like to use this technology in order to gain power and control over the world's resources and human populations.

    I wonder if the people trying to create AI really think that the AI would continue to work for them - part of the acronym is 'Intelligence'. Anyone who has dealt with kids growing up realizes that they go through a prolonged rebellious stage as they find and define their own identity. Imagine that in an AI with weapons. How do we put a conscience in an AI?
    CA renter wrote:

    AI isn't the only problem, though. Drone, surveillance, and weaponized technology (and miniaturization) is a problem in itself. People can be every bit as evil as a machine, so I think we need to work toward the defense against, and the elimination of, most intrusive or destructive technology because there's no way to ensure that it will be used for benevolent purposes.

    What do you think, ucodegen? You're probably better informed than most people regarding this issue. I've always appreciated your input on a variety of topics. It's long been obvious that you know very well what you're talking about when you post; you're not tooting your own horn. Thank you for your contributions to this site.


    I don't think we can prevent it, but we do need to look at defense. The ability to create this so-called 'tool' is too easy. I also think it is a matter of time, and not much time. That is why I brought up our problems with illicit drugs and our inability to control them despite the lives they take (in use, manufacture, and transport). That is also why I brought up North Korea, which has managed to get nukes despite bans and attempts at bribing or cajoling them into doing otherwise.

    A thought on defense:

  • EMPs (electromagnetic pulse weapons): while they can take out the drone, they may also take out a lot of other infrastructure. One factor in EMP sensitivity is the length of wire within the targeted device - the shorter the wire, the more power you need to induce a damaging over-voltage in the target. There are also ways to shield a device from EMP (to some extent). EMPs can be directed like an RF signal, but like any RF signal they can have side lobes, which could cause unintended damage. Because an EMP is a pulse, it comprises many frequencies - which could produce multiple side lobes from the emitter, each at a different frequency. A phased-array technique might work: timing multiple smaller pulses to arrive in the target area simultaneously, protecting areas near the emitters because each emitter produces a weaker signal. At this point I may be getting close to 'sensitive' areas, so I'll stop.
  • Lasers: useful against single targets, and resistant to the anti-sniper evasive maneuvers shown in the video. Time of arrival at the target is near instantaneous. You will need to be careful, because lasers continue in a straight line... and continue... and continue. If you miss, where does the beam stop? If you burn through, where does it stop?
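The wire-length point in the EMP bullet can be put in first-order terms: the voltage induced across a conductor scales roughly with the conductor's effective length (V ≈ E·L). The field strength below is an illustrative number I am assuming, not a specification.

```python
# First-order coupling estimate: induced voltage scales with the
# conductor's effective length exposed to the pulse field.
def induced_voltage(field_v_per_m, effective_length_m):
    return field_v_per_m * effective_length_m

field = 50_000.0  # V/m - assumed severe-pulse field, for illustration only
long_run = induced_voltage(field, 1.0)    # a meter of cabling
short_run = induced_voltage(field, 0.01)  # a centimeter-scale trace
print(long_run, short_run)  # the short trace sees 100x less voltage
```

This is why a drone with only centimeter-scale internal wiring needs a much stronger field to damage than infrastructure trailing meters of cable - and why an EMP strong enough for the drone may fry everything else nearby.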

    Thanks for the appreciation. I do try to keep things relevant. I put out my background because some posters on the board seem to prefer to troll, and I wanted to keep things at a thoughtful level. My background might also demonstrate why I feel this is a disturbing issue - giving some credence to my words.

  • Submitted by livinincali on November 22, 2017 - 12:36pm.

    The current state of battery technology makes this scenario much less likely than illustrated here. The energy density required to fly a small device over a fairly large range just isn't there. A tiny throwaway drone with a small explosive plus targeting and maneuvering hardware probably has a range of a mile or two, unless it's designed as a glider dropped from altitude. In order to execute an attack of this type you have to be in close proximity to the target - i.e., you'd have to fly a plane and drop a load of these over the target, or find some other way of getting them close before activating them. A nuke would still be far more effective at pure large-scale damage, unless of course you're looking to kill the people while preserving the infrastructure.
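The range point can be put in back-of-envelope terms. All three input numbers below are illustrative assumptions of mine, not measurements of any real device.

```python
# Back-of-envelope range estimate for a small battery-powered drone:
# flight time = battery energy / cruise power, range = time * speed.
def range_km(battery_wh, cruise_power_w, cruise_speed_mps):
    flight_time_h = battery_wh / cruise_power_w
    return flight_time_h * cruise_speed_mps * 3600 / 1000

# e.g. a ~10 Wh cell, ~60 W to stay airborne, ~10 m/s cruise
print(f"{range_km(10, 60, 10):.1f} km")  # roughly 6 km under these assumptions
```

The estimate is dominated by the power needed to stay airborne, which is why a glide-in delivery (no hover power draw) stretches the range so much.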

    Submitted by ucodegen on November 24, 2017 - 5:07pm.

    livinincali wrote:
    The current state of battery technology makes this scenario much less likely than illustrated here. The energy density required to fly a small device over a fairly large range just isn't there. A tiny throwaway drone with a small explosive plus targeting and maneuvering hardware probably has a range of a mile or two, unless it's designed as a glider dropped from altitude. In order to execute an attack of this type you have to be in close proximity to the target - i.e., you'd have to fly a plane and drop a load of these over the target, or find some other way of getting them close before activating them. A nuke would still be far more effective at pure large-scale damage, unless of course you're looking to kill the people while preserving the infrastructure.

    A nuke is not selective. If you noticed what was covered in the created scenario, the drones were selecting their targets - taking out some and leaving others. The drones were doing selective 'culling' based upon some unmentioned criteria.

    They did cover the distance issue. Note in the video the smaller drones being carried by larger ones; the larger drones were also shown acting as 'breaching' drones after they released the smaller ones. Range is an issue for both fueled planes and battery-powered craft. Just compare the ranges of the following: an RC gas-powered plane, a small personal plane, a luxury twin-engine, a Boeing 737, a Boeing 747. Fuel capacity scales with the third power - double the size gives eight times the fuel. Batteries will always have less energy capacity than fuel; that is due to the nature of how the energy is stored. Batteries are simple electron exchanges, while fuel is a complete chemical rearrangement.
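The third-power scaling claim is just the square-cube law, which can be sanity-checked in one function (k is a hypothetical linear scale factor):

```python
# Square-cube law: scaling linear size by k multiplies areas by k**2
# but volumes (and hence fuel capacity) by k**3.
def scale(k):
    return {"length": k, "area": k ** 2, "volume": k ** 3}

s = scale(2)
print(s)  # doubling size: 4x the area, 8x the fuel volume
```

The asymmetry between area (lift, drag) and volume (fuel) is one reason larger aircraft get disproportionately more range than small ones.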

    NOTE: The scenario shown was not nation state vs. nation state. It was factions within a nation state, or the nation state against various other groups within it.

    It could also be seen as a terrorist deployment within a nation state.

    Submitted by phaster on November 26, 2017 - 11:28am.

    ucodegen wrote:

    phaster wrote:

    whats the old proverb, may you live in interesting times

    the pragmatic goal for anyone w/ or w/out a tinfoil hat being,... “may you live thru interesting times”


    Can't agree more. This is one of the things that I think banning won't prevent. The knowledge is already out there. It might be better to think of how to defend against it. I like the link you provided. Did a read on your quote, and bookmarked the link to get back to it for a more thorough read. Here are two more links - one to the effect that putting a ban in place really won't stop development.

    https://spectrum.ieee.org/automaton/robo...

    https://spectrum.ieee.org/automaton/robo...

    The rest of the article is mostly about two near-future military scenarios; I just included the tech stuff that seemed applicable to the topic at hand!

    FWIW, if you want a related scary bot scenario specific to SoCal, consider the nuke waste @ san onofre being breached,...

    (SEE THE FOLLOWING LINK) a PDF that was sent to me by a neighbor of sorts who has been very active trying to stop unsafe nuke power (especially after what happened in Japan),... as a matter of fact, he organized a conference here in San Diego and played chauffeur to the mayor of Fukushima, who was here to discuss the "issue"

    https://drive.google.com/file/d/0B66GMOh...

    as related to the OP, a threat might come from a swarm of off-the-shelf commercial drones (or perhaps something homebuilt)

    http://www.interestingprojects.com/cruis...

    w/ improvised frag devices that punch holes in the 5/8"-thick containment vessels; and due to its geographic location, this creates a choke point on a major transportation corridor,...

    OR, considering Murphy's law,... in the long run the waste (in thin-walled tin cans, so to speak), the end result of a spent nuclear reaction, is far more dangerous than the input fuel - or said another way, the "bad" byproducts of power generation produce decay elements which are not as easily controlled as the purified uranium fuel (for a reactor),... it just leaks out!!!

    thus far it seems more like a real-life Halloween horror story WRT management of this and other stuff, because as long as stupid $hit like this goes on…

    https://piggington.com/commentary_why_fu...

    https://piggington.com/nobel_economics#c...

    we are on a downward trend!... basically, as I see things, the root cause/solution is money (which right now keeps the system working, but eventually will blow things apart, because there are not enough reserve$ to address the issues that arise when shit happens)

    Quote:

    Pension Math: Public Pension Spending and Service Crowd Out in California, 2003-2030

    As budgets are squeezed, what are state and local governments cutting? Core services, including higher education, social services, public assistance, welfare, recreation and libraries, health, public works, and in some cases, public safety.

    https://siepr.stanford.edu/research/publ...
