| TITLE | AUTHOR | DATE | SLIDES | PRESENTATION |
| Why Tool AIs Want to Be Agent AIs | Gwern Branwen | 31-05-2017 | | |
| A Map: AGI Failure Modes and Levels | Alexey Turchin | 24-05-2017 | https://www.dropbox.com/s/to6qowvhh14wfut/AGI_Failure_modes.pdf?dl=0 | https://youtu.be/kBTNrprdKiU |
| Neuralink and the Brain’s Magical Future | Tim Urban | 17-05-2017 | https://www.dropbox.com/s/e00gsu629zkzl4b/Neuralink.pdf?dl=0 | https://youtu.be/9NpNzlCptJI |
| The Myth of Superhuman AI | Kevin Kelly | 10-05-2017 | https://www.dropbox.com/s/00cnhpyndlo4jru/The_Myth_of_a_Superhuman_AI.pdf?dl=0 | https://youtu.be/WLSOmVXweSs |
| Merging our brains with machines won’t stop the rise of the robots | Michael Milford | 03-05-2017 | https://www.dropbox.com/s/og3pn5o7ofi101e/Humans_Merging_with_AI.pdf?dl=0 | https://youtu.be/Rgm6xMt54VA |
| Building Safe AI | Andrew Trask | 26-04-2017 | https://www.dropbox.com/s/3fnx251f9oiga8p/Building_Safe_AI.pdf?dl=0 | https://youtu.be/Ys-U-4vjRjw |
| AGI Safety Solutions Map | Alexey Turchin | 19-04-2017 | https://www.dropbox.com/s/ldyb7a32nd2089k/AGI_Safety_Solutions_Map.pdf?dl=0 | https://youtu.be/ZNSfUiXZwz0 |
| Strong AI Isn’t Here Yet | Sarah Constantin | 12-04-2017 | https://www.dropbox.com/s/297amvxrl58wgil/Strong_AI_Isnt_Here_Yet.pdf?dl=0 | https://youtu.be/GpuQlJ3IHBM |
| Robotics: Ethics of artificial intelligence | Stuart Russell et al. | 05-04-2017 | https://www.dropbox.com/s/8t5o990d1hf7ew6/Robotics_Ethics_of_artificial_intelligence.pdf?dl=0 | https://youtu.be/z_WhxqCWJ4s |
| Using machine learning to address AI risk | Jessica Taylor | 29-03-2017 | https://www.dropbox.com/s/52k4u10f95c6fvb/Using_Machine_Learning.pdf?dl=0 | https://youtu.be/vXNi4L5PH0A |
| Racing to the Precipice: a Model of Artificial Intelligence Development | Stuart Armstrong et al. | 22-03-2017 | https://www.dropbox.com/s/2zybpfb667vy9tl/Racing_To_The_Precipice.pdf?dl=0 | |
| Politics is Upstream of AI | Raymond Brannen | 15-03-2017 | https://www.dropbox.com/s/kvcyf4kwmqmlufx/Politics_Is_Upstreams_of_AI.pdf?dl=0 | |
| Coherent Extrapolated Volition | Eliezer Yudkowsky | 08-03-2017 | https://www.dropbox.com/s/2jldifzkpc82rmk/Coherent_Extrapolated_Volition.pdf?dl=0 | |
| --Cancelled due to illness-- | | 01-03-2017 | | |
| Towards Interactive Inverse Reinforcement Learning | Stuart Armstrong, Jan Leike | 22-02-2017 | https://www.dropbox.com/s/ouom3qzx8aofulv/Towards_Interactive_Inverse_Reinforcement_Learning_.pdf?dl=0 | |
| Notes from the Asilomar Conference on Beneficial AI | Scott Alexander | 15-02-2017 | https://www.dropbox.com/s/4ohpo4fpewwdz7q/Notes_from_the_Asilomar_Conference_on_Beneficial_AI.pdf?dl=0 | |
| My current take on the Paul-MIRI disagreement on alignability of messy AI | Jessica Taylor | 08-02-2017 | https://www.dropbox.com/s/9jtu8njaloxucrv/My_Current_take_on_the_Paul_MIRI_disagreement.pdf?dl=0 | |
| How feasible is the rapid development of Artificial Superintelligence? | Kaj Sotala | 01-02-2017 | https://www.dropbox.com/s/5u79rex6czszt23/How_Feasible_is_the_Rapid_Development_of_Artificial_Superintelligence.pdf?dl=0 | |
| Response to Cegłowski on superintelligence | Matthew Graves | 25-01-2017 | https://www.dropbox.com/s/bzlw8mc7k1fs0ox/Response_to_Ceglowski.pdf?dl=0 | |
| Disjunctive AI scenarios: Individual or collective takeoff? | Kaj Sotala | 18-01-2017 | https://www.dropbox.com/s/sdsm2mpaiq892o3/Individual_or_collective_takeoff.pdf?dl=0 | |
| Policy Desiderata in the Development of Machine Superintelligence | Nick Bostrom | 11-01-2017 | https://www.dropbox.com/s/jt6w0fzli5b0vg1/Policy%20Desiderata.pdf?dl=0 | |
| Concrete Problems in AI Safety | Dario Amodei et al. | 04-01-2017 | https://www.dropbox.com/s/wthme4pnhlipz2q/Concrete.pdf?dl=0 | |
| --Holiday-- | | 28-12-2016 | | |
| A Wager on the Turing Test: Why I Think I Will Win | Ray Kurzweil | 21-12-2016 | https://www.dropbox.com/s/iurbqzyaq9tt69f/Kurzweil.pdf?dl=0 | |
| Responses to Catastrophic AGI Risk: A Survey | Kaj Sotala, Roman V. Yampolskiy | 14-12-2016 | https://www.dropbox.com/s/iywy8znxx8yn1xt/Responses%20to%20AI.pdf?dl=0 | |
| Discussion of 'Superintelligence: Paths, Dangers, Strategies' | Neil Lawrence | 07-12-2016 | https://www.dropbox.com/s/pyhb55mz65bhe9m/Neil%20Lawrence%20-%20Future%20of%20AI.pdf?dl=0 | |
| Davis on AI capability and motivation | Rob Bensinger | 30-11-2016 | https://www.dropbox.com/s/eatjziiqsj5bmmg/Rob%20Bensinger%20Reply%20to%20Ernest%20Davis.pdf?dl=0 | |
| Ethical guidelines for a Superintelligence | Ernest Davis | 22-11-2016 | https://www.dropbox.com/s/7j14li21igzi5gx/Ethical%20Guidelines%20for%20a%20Superintelligence.pdf?dl=0 | |
| Superintelligence: Chapter 15 | Nick Bostrom | 15-11-2016 | https://www.dropbox.com/s/5jsusue656rdf2r/15%20Crunch%20Time.pdf?dl=0 | |
| Superintelligence: Chapter 14 | Nick Bostrom | 09-11-2016 | https://www.dropbox.com/s/l2myz5c7t3a6at9/14%20Science%20and%20Technology%20Strategy.pdf?dl=0 | |
| Superintelligence: Chapter 11 | Nick Bostrom | 01-11-2016 | https://www.dropbox.com/s/vj9j5saz39ese5i/11%20Multipolar%20Scenarios.pdf?dl=0 | |
| Superintelligence: Chapter 9 (2/2) | Nick Bostrom | 25-10-2016 | https://www.dropbox.com/s/ux66z2ujz9jgofe/9.%20Motivation%20Selection%20Methods.pdf?dl=0 | |
| Superintelligence: Chapter 9 (1/2) | Nick Bostrom | 18-10-2016 | https://www.dropbox.com/s/0mgnqcq075vehfv/Capability%20Control%20Methods.pdf?dl=0 | |
| Superintelligence: Chapter 8 | Nick Bostrom | 11-10-2016 | https://www.dropbox.com/s/ihj35vxbevfghal/Default%20doom.pdf?dl=0 | |
| Superintelligence: Chapter 7 | Nick Bostrom | 04-10-2016 | https://www.dropbox.com/s/pps6di0pza7wvab/The%20superintelligent%20Will.pdf?dl=0 | |
| Superintelligence: Chapter 6 | Nick Bostrom | 27-09-2016 | | |
| Superintelligence: Chapter 5 | Nick Bostrom | 20-09-2016 | | |
| Taxonomy of Pathways to Dangerous Artificial Intelligence | Roman V. Yampolskiy | 13-09-2016 | | |
| Unethical Research: How to Create a Malevolent Artificial Intelligence | Roman V. Yampolskiy | 06-09-2016 | | |
| Superintelligence: Chapter 4 | Nick Bostrom | 30-08-2016 | | |
| Superintelligence: Chapter 3 | Nick Bostrom | 23-08-2016 | | |
| Superintelligence: Chapter 1+2 | Nick Bostrom | 16-08-2016 | | |
| Why I am skeptical of risks from AI | Alexander Kruel | 09-08-2016 | | |
| --Break due to family expansion-- | | 02-08-2016 | | |
| --Break due to family expansion-- | | 26-07-2016 | | |
| Intelligence Explosion FAQ | Luke Muehlhauser | 19-07-2016 | | |
| A toy model of the treacherous turn | Stuart Armstrong | 12-07-2016 | | |
| The Fable of the Dragon-Tyrant | Nick Bostrom | 05-07-2016 | | |
| The Fun Theory Sequence | Eliezer Yudkowsky | 28-06-2016 | | |
| Intelligence Explosion Microeconomics | Eliezer Yudkowsky | 21-06-2016 | | |
| Strategic Implications of Openness in AI Development | Nick Bostrom | 14-06-2016 | | |
| That Alien Message | Eliezer Yudkowsky | 07-06-2016 | | |
| The Value Learning Problem | Nate Soares | 31-05-2016 | | |
| Decisive Strategic Advantage without a Hard Takeoff | Kaj Sotala | 24-05-2016 | | |