Should we be concerned with recent advancements in artificial intelligence?

http://www.nytimes.com/2015/05/26/science/darpa-robotics-challenge-terminator.html?_r=3


While self-aware robots are not a current threat, many big names in tech are worried about the possible introduction of these robots into society. Artificial intelligence is still in the very early stages of development, but is this something we should even be dedicating resources to? I'm not quite sure what my personal opinion is on the matter, but I'm interested to see what you all think of this debate.


15 Replies

Lane Campbell, Advisor
Lifelong Entrepreneur
If Elon Musk and Bill Gates are speaking publicly about it, then yes, we should be concerned.
Kevin Goldstein, Advisor
IT
I feel it's the same as any large paradigm-shifting development the human race has been through in its history (examples: Plato argued against literacy, workers during the Industrial Revolution feared there would be no more manufacturing jobs, and opponents of television said it would turn the human race into mindless drones). The only consistent thread across these shifts and changes is that (a) yes, they changed the way we viewed and interacted with the world, and (b) it was never how we expected/predicted it.

So, is it something we should be concerned about and fear? No. Is it something that will impact how we interact with the world around us? Yes, but probably not in the way we expect it to.

Lastly, large scale changes are usually only dangerous to those who are unwilling to learn and adapt.

Stan Podolski, Entrepreneur • Advisor
CEO at Nimble Aircraft
There is no AI on the horizon, and there is no singularity coming either.

You can be concerned if you want, but otherwise, all these robots are nothing more than slightly smarter automatons.

They can do what they were programmed to do; they cannot evolve.
Max Goff, Entrepreneur • Advisor
Big Data Engineering, Data Science, Marketing, Investing
Actually, genetic algorithms and genetic programming are but two disciplines in AI that do give rise to evolving 'intelligence,' the inner workings of which can quickly become inscrutable. This is to say that, yes, machines can do a lot more than they were explicitly programmed to do, and, sorry, you cannot really know or fully understand how they arrived at their conclusions, even though those conclusions may be superior to human cognitive processing.
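The evolutionary dynamic described above can be illustrated with a toy genetic algorithm (this sketch is mine, not from the thread): nobody writes down the answer, only a fitness rule, selection, crossover, and mutation, and a good solution emerges on its own.

```python
import random

def one_max(bits):
    # Fitness: count the 1s. The "goal" is never given as an answer,
    # only as a scoring rule the population is selected against.
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60, mutation_rate=0.05, seed=1):
    rng = random.Random(seed)
    # Random starting population of bitstrings.
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=one_max, reverse=True)
        survivors = pop[: pop_size // 2]          # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]            # random mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=one_max)

best = evolve()
print(one_max(best))  # typically at or near the maximum of 20
```

Even in this trivial case, no line of the program says what the final bitstring should look like; scale the representation up and the path the search took becomes hard to reconstruct, which is the inscrutability Max is pointing at.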

The question is: can machines be built with general-purpose, human-competitive intelligence? The Turing Test is an excellent rubric in that regard, and to date it has not been conquered.

In my view, the advent of general-purpose, human-competitive AI is not in itself an existential threat to humanity. But it might be more of an indirect threat. Clearly, if AI is able to replace humans in all vocations, and do the work cheaper and better, then no human job is safe. How we cope with the social and economic consequences will determine if we survive -- not because of a Matrix-like war with the machines, but more like a new serfdom imposed by those who own vs. those who do not. The have/have-not pattern can easily be magnified and rigidly enforced if we do not figure out a better model. AI just might be our downfall in that regard, but probably not because of evil machine consciousness.
Reuven Granot, Entrepreneur • Advisor
Corporate Strategic and Scientific Officer at Perlis Ltd

As our expertise is in the field of robotics and AI, I would like to state that I have nothing to contradict the different opinions presented by John Markoff in the article linked above (May 25, 2015). However, in my opinion the question is whether, in principle, robots can be manufactured with intelligence superior to what we as humans have. At least theoretically, robots designed using multi-agent systems are an assembly of simple parts, and their intelligence may develop by a procedure similar to evolution, just much faster. We are still very far from robots that can be controlled safely enough to replace combat soldiers. We are even farther from robots that could direct their own development in ways that would frighten human civilization. But it is not impossible...
Karl Schulmeisters, Entrepreneur
CTO ClearRoadmap

"AI" is hardly in "the very early stages of development". So-called "big data" is all about AI. There is a profound shift in the social economy coming, and coming very fast, which is why Musk and Gates are talking about it. Here are some relevant texts:

  • Who Owns the Future (Jaron Lanier)
  • The Second Machine Age (Erik Brynjolfsson and Andrew McAfee)
  • Race Against the Machine (Erik Brynjolfsson and Andrew McAfee)

David Henderson, Entrepreneur
Technology Coordinator at Southwest Arkansas Education Cooperative
"Should we be concerned?" That, of course, depends on just what one's concern actually is. For example, AI is already used to write many online articles for several big-name publishers. The output has often been said to come pretty danged close to a human writer's. Should we be concerned? If so, to what degree? As it turns out, AI writing articles only works for certain kinds of articles, say, baseball games. There are definitive stats, a generally accepted list of terms and colloquialisms, and so on. So, feed all of that into a system and it doesn't take much, really, to kick out articles that appear to have been written by humans. Again, where/how much is the concern?
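The stats-plus-templates pipeline described here really is that simple in its basic form. A minimal sketch (my own toy example; the team names, fields, and templates are made up, not any publisher's actual system):

```python
import random

# A box score reduced to a flat record of stats.
game = {"winner": "Cubs", "loser": "Mets", "w_runs": 5, "l_runs": 3,
        "star": "Rizzo", "hits": 3}

# Hand-written templates using the accepted vocabulary of the domain.
templates = [
    "The {winner} edged the {loser} {w_runs}-{l_runs}, powered by "
    "{star}'s {hits}-hit night.",
    "{star} collected {hits} hits as the {winner} beat the {loser} "
    "{w_runs} to {l_runs}.",
]

random.seed(0)
# Pick a template and fill in the stats: a sentence a reader could
# easily mistake for a human-written game recap.
story = random.choice(templates).format(**game)
print(story)
```

Real systems add grammar rules and narrative selection (which stat is the "story" of the game), but the core is exactly this: structured data poured into domain-specific phrasing, which is why it works for baseball recaps and not for open-ended commentary.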

We must assume (lest we be danged fools) that Asimov's rules for robots (and by extension AI) will not apply in the future. There will undoubtedly be those whose sole purpose is to create AI with the intent of doing harm to humans, intentionally or not. Take, for example, morality tests. AI will not have morals, so the decisions it makes will be based either on a set of rules programmed into it or on a system it developed on its own as it "learned" about humans and "living" on this planet. Which track in the "Trolley Test" would the robot choose? Which one would you choose? Are they indistinguishable? Is that the part that scares people?

I think that is what scares people: AI will face the same scenarios as humans and will make the same choices humans make - or could make. We are guided by multiple sets of rules - societal, internal, parental, spiritual (for some), and so on. So, what if we design AI such that it does not adhere to a certain set of programmed black-and-white rules, but rather it lives in the same gray world we do? That is what scares most people, I would venture. The extension of that, then, is whether or not humans are even "needed" on the planet.

My argument with colleagues has often been that AI cannot be as irrational as humans. We do things based not only on sets of rules running around in our heads, but also on emotion or seemingly random desires. I could get up right now, go to the kitchen, and grab a bag of chips and a soft drink, knowing my family will be eating dinner within 30 minutes. It makes no sense and could actually be detrimental in a variety of ways (health, making my wife upset that I ate and am not hungry, making myself sick if I ate the chips and then a full meal, etc.). Will AI ever act so irrationally? Would AI ever get to the point of suicidal thoughts and actions? What about last-moment changes in our behavior? Everything a particular person thinks and feels may have led that person to the brink of taking their own life, but at the last moment, they opt not to. No logical reasoning, no pros-and-cons list. They just don't do it. Will AI ever be in a similar situation? And if so, what actions would it take?

Should we be concerned? That depends.

Ming Tsui, Entrepreneur
HabitatForAll.org

We should be concerned about human evil, since robots are just machines created by humans. That human evil element is what we should be concerned with, and how to deal with those who want to create evil robots that will harm humanity.

Craig Walmsley, Entrepreneur
MD @ Progenit. Founder @ rtobjects. Locke Scholar.
Nope.

Because we really don't have a proper grip on what "Intelligence" amounts to, and therefore only a very vague notion of what it might take to create it.

So I wrote an article on roughly where we are now, and what it might take to actually create "intelligence".
Brendan Gowing, Advisor
CTO at CENTURY Tech
No. It's media hype.


"I'd like to think that this is rock bottom. Journalists can't possibly be any more clueless, or callously traffic-baiting, when it comes to robots and AI."