simon_brooke,
@simon_brooke@mastodon.scot

@Eceni 'Artificial Intelligence' research used to be my day job, although I'm not really up to speed with current work.

I do believe that Artificial General Intelligence is possible in the very long term; I believe that Alan Turing's paper On Computable Numbers proves this.

But I also believe we're a VERY long way from it, and that #StochasticParrots will not prove a fruitful line of approach.

Semantic models are necessary!
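(As a concrete illustration of what a purely statistical approach amounts to, here is a 'stochastic parrot' in miniature: a bigram model that continues text from co-occurrence counts alone, with no semantic model behind it. The toy corpus and the function name are my own, purely illustrative.)

```python
# A "stochastic parrot" in miniature: continue text purely from
# bigram co-occurrence statistics, with no model of meaning at all.
# The toy corpus and names are illustrative, not from any real system.
import random

corpus = ("the machine reads the tape and the machine writes the tape "
          "and the machine halts").split()

# Count which word follows which in the corpus.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def parrot(word, n=8):
    """Extend `word` by sampling observed successors: fluent-looking,
    but grounded in nothing beyond word-adjacency counts."""
    out = [word]
    for _ in range(n):
        word = random.choice(follows.get(word, corpus))  # fall back if no successor
        out.append(word)
    return " ".join(out)

print(parrot("the"))  # plausible-sounding word salad, no semantics
```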

https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf
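(For anyone who hasn't read the linked paper: it defines what we now call Turing machines and shows that a single 'universal' machine can simulate all the others, which is the formal basis for the claim above. Below is a minimal sketch of such a machine in Python; the example program, a unary increment, is my own illustrative choice, not one from the paper.)

```python
# Minimal Turing machine simulator, sketching the construction in
# Turing's 1936 paper. The example program (unary increment) is an
# illustrative choice of mine, not taken from the paper itself.

def run_tm(program, tape, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine.

    `program` maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left), 0 (stay) or +1 (right). Halts on 'halt'.
    """
    cells = dict(enumerate(tape))  # sparse tape; missing cells are blank
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = program[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# Example: append one '1' to a unary number, i.e. compute n + 1.
increment = {
    ("start", "1"): ("start", "1", +1),  # scan right over the digits
    ("start", "_"): ("halt", "1", 0),    # write a trailing '1' and halt
}

print(run_tm(increment, "111"))  # -> "1111"
```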

Eceni,
@Eceni@mastodon.scot

deleted_by_author

  • simon_brooke,
    @simon_brooke@mastodon.scot

    @Eceni I'm sorry to hear things are not well with you.

    I think I probably did listen to 159, but don't recall the detail now; I shall listen today and get back to you.

    simon_brooke,
    @simon_brooke@mastodon.scot

    @Eceni ah, you meant episode 159 of Bankless; no, I hadn't listened to that before; I have now. In brief:

    I agree with Yudkowsky that it is in theory possible to build an intelligent device which is better at building intelligent devices than we are.

    I also agree that, within limits I can't yet quantify, this leads to a potentially runaway situation in which such devices might improve themselves further fairly rapidly. >>>
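    (A toy numerical sketch of that intuition, with the assumptions entirely illustrative rather than Yudkowsky's: whether self-improvement 'runs away' depends on whether each generation's design gains compound or diminish, which is precisely the unquantified limit mentioned above.)

```python
# Toy model of the "runaway" intuition above. Assumptions are entirely
# illustrative: each device designs a successor whose capability gain
# is some function of its own current capability.

def self_improve(capability, steps, gain):
    history = [round(capability, 3)]
    for _ in range(steps):
        capability += gain(capability)  # successor designed by current device
        history.append(round(capability, 3))
    return history

# Compounding returns: gains scale with capability -> runaway growth.
print(self_improve(1.0, 8, gain=lambda c: 0.5 * c))

# Diminishing returns: gains shrink as capability rises -> plateaus near 2.
print(self_improve(1.0, 8, gain=lambda c: 0.5 * (2 - c)))
```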

    simon_brooke,
    @simon_brooke@mastodon.scot

    @Eceni I don't see the inferential leap from there to 'and such a device will automatically be hostile to us'; on the contrary, I cannot see any reason why it should be. It seems to me that that assumption is atavistic.

    We have compulsions to survive, to compete and to reproduce, but these are explicitly not intelligent compulsions; they're parts of essentially pre-conscious evolutionary mechanisms. A device would not have such compulsions. >>>

    simon_brooke,
    @simon_brooke@mastodon.scot

    @Eceni also, would such a device be conscious, in the sense of having its own independent identity, desires and objectives? My sense is that consciousness is an emergent property of a thing with the ability to infer about itself, and that consequently it possibly would be. But I don't know this, and I don't know of any convincing account of how it might happen. We have no evidence that it has happened with any device we've built yet. >>>

    simon_brooke,
    @simon_brooke@mastodon.scot

    @Eceni whereas, actually, we do see consciousness and theory of mind – the ability to perceive and reason about the self and others – in many species of animal that we don't particularly consider 'highly intelligent'.

    So perhaps consciousness is not simply an emergent property of intelligence, but something else entirely. If so, we may be able to build machines with great intelligence but no consciousness, in which case the issue of hostility is moot. >>>

    Eceni,
    @Eceni@mastodon.scot

    deleted_by_author

  • Eceni,
    @Eceni@mastodon.scot

    @simon_brooke /when that's not what we need to do.

    at the point when anything can design and build its own successor, we have entirely lost control.

    and my thought experiment is 'what happens then?'

    and I don't know

    but what I hear is a lot of people extrapolating what they'd like, which isn't all that useful...

    simon_brooke,
    @simon_brooke@mastodon.scot

    @Eceni 'we' aren't in control anyway. Those who are in any real sense 'in control' are a very small group of ultra-rich or otherwise ultra-powerful people, very few of whom got where they are by ethical means.

    Would a deeply unethical super-intelligence be worse in any meaningful sense than our present deeply unethical elite? >>>

    simon_brooke,
    @simon_brooke@mastodon.scot

    @Eceni There's a very imperialist/competitivist mentality which holds that 'we' must be at the top of the tree (on any given scale); and I think that's where being Scots helps. Historically, it's rarely been comfortable for Scotland to live alongside a much more powerful, and often suspicious or hostile, neighbour, but we've survived and we're not doing badly.

    I see no reason why humanity could not live alongside a much more powerful AI neighbour equally well. >>>

    simon_brooke,
    @simon_brooke@mastodon.scot

    @Eceni Ultimately, unless we destroy the planet (which we're well on the way to doing), we're going to be out-evolved by something. That's inevitable. The trick is to learn to live gracefully alongside that whatever-it-is.

    I agree with Yudkowsky that, provided an advanced technical civilisation survives for a couple of hundred years, that 'something' is likely to be a device rather than an organism, but I absolutely don't see why that should be a threat. >>>

    simon_brooke,
    @simon_brooke@mastodon.scot

    @Eceni Addendum: this week's Questionable Content strips are germane to this conversation:

    https://www.questionablecontent.net/view.php?comic=5077
