| id (string, len 36) | source (15 classes) | formatted_source (13 classes) | text (string, 2 to 7.55M chars) |
|---|---|---|---|
aa32e7fd-56bb-4648-9000-1f9dca6d5ac6 | trentmkelly/LessWrong-43k | LessWrong | Why have insurance markets succeeded where prediction markets have not?
Insurance is big business and is load-bearing for many industries. It has gained popular acceptance. Prediction markets have not. This is despite clear similarities between the two domains.
One can list similarities:
* You are trading financial... |
fb934409-6c2e-417f-90d0-6ae3096c22dd | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post3791
Jaan Tallinn has suggested creating a toy model of the various common AI arguments, so that they can be analysed without loaded concepts like "autonomy", "consciousness", or "intentionality". Here a simple attempt for the " treacherous turn "; posted here for comments and suggestions. Meet agent L. This a... |
8ebff371-9ef6-4da4-8dc8-5a3b4e66daae | trentmkelly/LessWrong-43k | LessWrong | Probability updating question - 99.9999% chance of tails, heads on first flip
This isn't intended as a full discussion, I'm just a little fuzzy on how a Bayesian update or any other kind of probability update would work in this situation.
You have a coin with a 99.9999% chance of coming up tails, and a 100% chance of... |
41671e87-e333-49b3-bc52-fda3c410fdef | trentmkelly/LessWrong-43k | LessWrong | I'm Not An Effective Altruist Because I Prefer...
|
28a1cd65-3b33-4f6a-becb-fdf78e38c476 | trentmkelly/LessWrong-43k | LessWrong | When Most VNM-Coherent Preference Orderings Have Convergent Instrumental Incentives
This post explains a formal link between "what kinds of instrumental convergence exists?" and "what does VNM-coherence tell us about goal-directedness?". It turns out that VNM coherent preference orderings have the same statistical inc... |
d3d0537a-a0a9-44d2-a32e-ed74271b6b43 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | CAIS-inspired approach towards safer and more interpretable AGIs
Epistemic status: a rough sketch of an idea
Current LLMs are huge and opaque. Our interpretability techniques are not adequate. Current LLMs are not likely to run hidden dangerous optimization processes. But larger ones may.
Let's cap the model size at... |
38175680-41aa-4d42-896e-8187addbf4e9 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Perform Tractable Research While Avoiding Capabilities Externalities [Pragmatic AI Safety #4]
*This is the fourth post in* [*a sequence of posts*](https://www.alignmentforum.org/posts/bffA9WC9nEJhtagQi/introduction-to-pragmatic-ai-safety-pragmatic-ai-safety-1) *that describe our models for Pragmatic AI Safety.*
We ar... |
5e3690eb-dcf5-40ce-9511-d7dfb680e897 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Deep Learning Application in Security and Privacy -- Theory and Practice: A Position Paper
1 Introduction
---------------
Computing technology is becoming an integral part of our lives and has many facets ranging from supercomputing (used in weather prediction, cutting-edge research and business automation) to embe... |
281bb35b-bf4e-4e53-a586-5cdba13bec3b | trentmkelly/LessWrong-43k | LessWrong | FTL travel summary
I started writing this 2 years ago, got bored, and never finished. Posting it now just to get it out of my drafts.
Desirability of fast travel
If a superintelligent AI gains power over the physical world, it's very likely to want to expand its influence quickly. This is most obvious for a misalign... |
e3e90322-bec1-47e5-acd8-22d1954d8b3c | trentmkelly/LessWrong-43k | LessWrong | Your transhuman copy is of questionable value to your meat self.
I feel safe saying that nearly everyone reading this will agree that, given sufficient technology, a perfect replica or simulation could be made of the structure and function of a human brain, producing an exact copy of an individual mind including a con... |
055c42f7-4f4e-4266-bc63-5f806e20645c | trentmkelly/LessWrong-43k | LessWrong | [Optimal Philanthropy] Laptops without instructions
Just read this article, which describes a splashy, interesting narrative which jives nicely with my worldview. Which makes me suspicious.
http://dvice.com/archives/2012/10/ethiopian-kids.php
> The One Laptop Per Child project started as a way of delivering technolo... |
34923c06-11bc-4591-b2e0-01a6f7ca891f | trentmkelly/LessWrong-43k | LessWrong | Is Progress Real?
...
I couldn't find the essay “Is Progress Real?” by the historians Will and Ariel Durant (The Lessons of History, 1968) anywhere on the internet so here I am now, posting it on the internet. I think it's a classic, short but filled with provoking ideas and beautiful prose and I don’t think it’s an ... |
a9c99f36-50f5-4608-af9e-59ce5879864b | trentmkelly/LessWrong-43k | LessWrong | [AN #58] Mesa optimization: what it is, and why we should care
Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email.
Hig... |
1db4e567-e472-4ca3-a981-25a78c8d46c2 | trentmkelly/LessWrong-43k | LessWrong | The meta-evaluation question
Evaluation refers to an agent's evaluation of the expected benefit of a particular action; meta-evaluation is the agent's evaluation of the computational cost-effectiveness of evaluation in general.
There are two difficulties with meta-evaluation. One is the nature of the data, which by ... |
e967e4c4-315b-45d5-9628-0dcd54c86710 | trentmkelly/LessWrong-43k | LessWrong | [Linkpost] Critiques of Redwood Research
An anonymous user named Omega posted a critique of Redwood on the EA Forum. The post highlights four main areas: (1) Lack of senior ML staff, (2) Lack of communication & engagement with the ML community, (3) Underwhelming research output, and (4) Work culture issues.
I'm linkp... |
d3305a5a-3d69-4c29-9a5a-5130bd25ceca | trentmkelly/LessWrong-43k | LessWrong | Transformer Debugger
Transformer Debugger (TDB) is a tool developed by OpenAI's Superalignment team with the goal of supporting investigations into circuits underlying specific behaviors of small language models. The tool combines automated interpretability techniques with sparse autoencoders.
TDB enables rapid explo... |
54a2669e-aea5-4d05-ade2-ad999b038f0b | trentmkelly/LessWrong-43k | LessWrong | Temporal allies and spatial rivals
(This post co-authored by Robin Hanson and Katja Grace.)
In the Battlestar Galactica TV series, religious rituals often repeated the phrase, “All this has happened before, and all this will happen again.” It was apparently comforting to imagine being part of a grand cycle of time. I... |
39e8f19c-d540-44b6-acea-dace4b9a7be6 | StampyAI/alignment-research-dataset/special_docs | Other | Power to the People: The Role of Humans in Interactive Machine Learning
Power to the People: The Role of Humans in Interactive Machine Learning Saleema Amershi, Maya Cakmak, W. Bradley Knox, Todd Kulesza1 Abstract Systems that can learn interactively from their end-users are quickly becoming widespread. Until rec... |
4f4d84a5-b44f-438e-a8dd-d6ceb079f9c2 | trentmkelly/LessWrong-43k | LessWrong | Weekly LW Meetups
This summary was posted to LW Main on December 31st. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
* Baltimore Area Meetup: 03 January 2016 02:00PM
The remaining meetups take place in cities with regular scheduling, but involve ... |
99a6e1a6-5e84-47af-8ef1-c679cc5a69f7 | trentmkelly/LessWrong-43k | LessWrong | Yes, Virginia, You Can Be 99.99% (Or More!) Certain That 53 Is Prime
TLDR; though you can't be 100% certain of anything, a lot of the people who go around talking about how you can't be 100% certain of anything would be surprised at how often you can be 99.99% certain. Indeed, we're often justified in assigning odds r... |
95a7b30c-5666-40cf-8766-e010bc46ffef | trentmkelly/LessWrong-43k | LessWrong | On Pruning an Overgrown Garden
As a new user, it's hard to know where to start, and how to contribute to a community. being a Good Samaritan by nationality, I was reading through the guides and posts pertaining to the LessWrong community. One article that stood out to me is the "Well-Kept Gardens Die By Pacifism" post... |
d2d04ae4-885b-4223-a105-c5d54eaef1eb | trentmkelly/LessWrong-43k | LessWrong | The Inner Workings of Resourcefulness
Cross-posted to my personal blog.
Meta: If not for its clumsiness, I would have titled this piece “[some of] the inner workings of resourcefulness”. In other words, I do not want to pretend this piece offers a comprehensive account, but merely partially insights.
About this ti... |
87af6eea-c596-492a-9d51-cc52f58f0bf2 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Behavioural statistics for a maze-solving agent
**Summary:** [Understanding and controlling a maze-solving policy network](https://www.lesswrong.com/posts/cAC4AXiNC5ig6jQnc/understanding-and-controlling-a-maze-solving-policy-network) analyzed a maze-solving agent's behavior. We isolated four maze properties which seem... |
a499fb1c-c6a6-4e03-98dd-9ab54c919bc3 | trentmkelly/LessWrong-43k | LessWrong | Stuxnet, not Skynet: Humanity's disempowerment by AI
Several high-profile AI skeptics and fellow travelers have recently raised the objection that it is inconceivable that a hostile AGI or smarter than human intelligence could end the human race. Some quotes from earlier this year:
Scott Aaronson:
> The causal story... |
eb0c49fb-a875-4d7b-8a6a-757e4f2810a4 | StampyAI/alignment-research-dataset/arbital | Arbital | Safe impact measure
A safe impact measure is one that captures all changes to every variable a human might care about, with no edge-cases where a lot of value could be destroyed by a 'low impact' action. A safe impact measure must also not generate so many false alarms of 'high impact' that no strategy can be disting... |
71ca099e-7a32-486e-af5c-e0a8fe2a7c48 | trentmkelly/LessWrong-43k | LessWrong | Doing Prioritization Better
Or on the types of prioritization, their strengths, pitfalls, and how EA should balance them
The cause prioritization landscape in EA is changing. Prominent groups have shut down, others have been founded, and everyone is trying to figure out how to prepare for AI. This is the first in... |
5e73c1cb-c194-493a-af2b-b6de4d621b79 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Brain-over-body biases, and the embodied value problem in AI alignment
Note: This essay was published [here](https://forum.effectivealtruism.org/posts/zNS53uu2tLGEJKnk9/ea-s-brain-over-body-bias-and-the-embodied-value-problem-in) on EA Forum on Sept 21, 2022. The description of the brain-over-body biases in the EA sub... |
56fb13d4-9cef-47b4-b120-e7ae0e6066c9 | trentmkelly/LessWrong-43k | LessWrong | Pitfalls with Proofs
This post distills some excellent writing on decision theory from Abram Demski about spurious proofs and Troll Bridge problems, and it has significant parallels with this work. It recently won a $250 prize in UC Berkeley Effective Altruism's AI safety distillation contest. The goal of this post is... |
19b17d97-5186-4799-99b5-4a3dfe9cab0f | trentmkelly/LessWrong-43k | LessWrong | Map and territory: Natural structures
This will be a very short post which simply defines one term which I find useful when discussing the map and the territory.
I find it very useful to have a term that helps clarify that the map is not completely arbitrary and that there are things in the territory that are natural... |
a01316c7-2280-402a-883e-3cdfab44fe95 | trentmkelly/LessWrong-43k | LessWrong | July 2020 gwern.net newsletter
None |
531ae57f-a26c-49be-954a-cb409a7a7f87 | trentmkelly/LessWrong-43k | LessWrong | Against Almost Every Theory of Impact of Interpretability
Epistemic Status: I believe I am well-versed in this subject. I erred on the side of making claims that were too strong and allowing readers to disagree and start a discussion about precise points rather than trying to edge-case every statement. I also think th... |
91ecf129-1cd0-45c1-8ebf-8d66ee29a1e4 | trentmkelly/LessWrong-43k | LessWrong | No Good Logical Conditional Probability
Fix a theory T over a language L. A coherent probability function is one that satisfies laws of probability theory, each coherent probability function represents a probability distribution on complete logical extensions of T.
One of many equivalent definitions of coherence is t... |
d0841e9a-d59b-4fc0-8b42-23fcf1f67db3 | trentmkelly/LessWrong-43k | LessWrong | Updated Deference is not a strong argument against the utility uncertainty approach to alignment
Thesis: The problem of fully updated deference is not a strong argument against the viability of the assistance games / utility uncertainty approach to AI (outer) alignment.
Background: A proposed high-level approach to A... |
a39a9968-e5d7-4ef6-a893-73cc3451e0a0 | trentmkelly/LessWrong-43k | LessWrong | Playing the game vs. finding a cheat code
This is a linkpost from my blog De Novo.
Imagine a new Pokémon game has just come out, and you really want to catch a Zapdos. It’s listed in the game’s Pokédex, so you know it must be possible to catch, but you’re not sure how.
You could either:
1. Play the game normally. ... |
8f511594-03a1-4ddb-bf8e-e3a54e8459e4 | StampyAI/alignment-research-dataset/blogs | Blogs | This Museum Does Not Exist: GPT-3 x CLIP
---
Table of Contents* [Gallery I](#gallery-i)
+ [`The Death of Archimedes`](#the-death-of-archimedes)
+ [`Still Life with Mirror`](#still-life-with-mirror)
+ [`The Poet's Abbreviated Life`](#the-poets-abbreviated-life)
+ [`Narcissus`](#narcissus)
+ [`Dream of the Last Su... |
015bff50-017e-4511-9557-9b98e27c0c4c | trentmkelly/LessWrong-43k | LessWrong | Is my view contrarian?
Previously: Contrarian Excuses, The Correct Contrarian Cluster, What is bunk?, Common Sense as a Prior, Trusting Expert Consensus, Prefer Contrarian Questions.
Robin Hanson once wrote:
> On average, contrarian views are less accurate than standard views. Honest contrarians should admit this, t... |
5af1adcc-a69d-4cc4-a0cb-5a57c7e2cc05 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | The Isolation Assumption of Expected Utility Maximization
In this short essay I will highlight the importance of what I call the “isolation assumption” in expected utility theory. It may be that this has already been named in the relevant literature and I just don’t know it. I believe this isolation assumption is both... |
2afb728a-d1db-43de-9e27-d4d85bd3d905 | trentmkelly/LessWrong-43k | LessWrong | What is the ground reality of countries taking steps to recalibrate AI development towards Alignment first?
You can possibly put the lid on European AI research, the biggest sign to this is civilian administration oversight over the internet which is incredibly tight compared to European Union's non authoritarian gove... |
b1b6aaaa-9d5d-4bc0-a998-a6bac942b7d1 | trentmkelly/LessWrong-43k | LessWrong | If I knew how to make an omohundru optimizer, would I be able to do anything good with that knowledge?
I'd bet we're going to figure out how to make an omohundro optimiser - a fitness-maximizing AGI - before we figure out how to make AGI that can rescue the utility function, preserve a goal, or significantly optimise ... |
bb08245c-a4a3-416e-ae4a-8694b71aca36 | trentmkelly/LessWrong-43k | LessWrong | Draft report on existential risk from power-seeking AI
I’ve written a draft report evaluating a version of the overall case for existential risk from misaligned AI, and taking an initial stab at quantifying the risk from this version of the threat. I’ve made the draft viewable as a public google doc here (Edit: arXiv ... |
552779a8-51a4-432d-b5a0-53cce72f5714 | trentmkelly/LessWrong-43k | LessWrong | [outdated] My current theory of change to mitigate existential risk by misaligned ASI
Epistemic status: I'm pretty confident about this; looking for feedback and red-teaming. As of 2023-06-30, I notice multiple epistemological errors in this document. I do not endorse the reasoning in it, and while my object-level cl... |
b9dfad95-a134-41a5-aa50-a40131e0337f | trentmkelly/LessWrong-43k | LessWrong | What are good models of collusion in AI?
I'm working on a paper and accompanying blog post examining theories of collusion in the context of oligopolistic firms in economics, to see what those models would say about AI safety scenarios (e.g. values handshakes, acausal negotiation, etc.). I'm very familiar with the eco... |
a319e0c0-c03e-4eb0-92d4-76669fd190cb | trentmkelly/LessWrong-43k | LessWrong | How much might AI legislation cost in the U.S.?
This piece was previously published on my Substack.
Policymakers are rushing to regulate artificial intelligence (AI), but the economic impact of these regulations remains largely unexplored. While the European Union and the United Kingdom have produced cost estimates, ... |
5607c82d-c74d-4643-84cf-a91d2489aaa2 | trentmkelly/LessWrong-43k | LessWrong | What are examples of problems that were caused by intelligence, that couldn’t be solved with intelligence?
Like everyone, AI safety has been on my mind a lot lately. I was thinking about how most problems that are caused by intelligence in the world to-date seem to have always be solved by or have the potential to be ... |
1e13af08-c35f-4780-8229-474baaa3cd44 | trentmkelly/LessWrong-43k | LessWrong | Arbital has been imported to LessWrong
Arbital was envisioned as a successor to Wikipedia. The project was discontinued in 2017, but not before many new features had been built and a substantial amount of writing about AI alignment and mathematics had been published on the website.
If you've tried using Arbital.com t... |
e464eafc-0fb7-412b-8a9c-c6d46af4e5e1 | trentmkelly/LessWrong-43k | LessWrong | [Link] Ideology, Motivated Reasoning, and Cognitive Reflection: An Experimental Study
Related to: Knowing About Biases Can Hurt People
HT: Marginal Revolution
Paper.
> Social psychologists have identified various plausible sources of ideological polarization over climate change, gun violence, national security, an... |
9d9946bb-98d4-45bf-ae7b-138b27dc0ff3 | StampyAI/alignment-research-dataset/special_docs | Other | Non-pharmacological cognitive enhancement
Front Syst Neurosci. 2014; 8: 107. Published online 2014 Jun 11. doi: [10.3389/fnsys.2014.00107](//doi.org/10.3389%2Ffnsys.2014.00107) PMCID: PMC4052735 PMID: [24999320](https://pubmed.ncbi.nlm.nih.gov/24999320) Pharmacological cognitive enhancement—how neuroscientific research c... |
be379912-b3c7-4900-b045-e7833223410c | trentmkelly/LessWrong-43k | LessWrong | Meetup : Pittsburgh: Value of Information
Discussion article for the meetup : Pittsburgh: Value of Information
WHEN: 13 November 2012 06:00:00PM (-0800)
WHERE: EatUnique, S Craig St, Pittsburgh
Phone 412-304-6258 if you can't find us
Discussion article for the meetup : Pittsburgh: Value of Information |
095d32c4-137b-4b17-8254-ac414bb815f9 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "(Response to: You cannot be mistaken about (not) wanting to wirehead, Welcome to Heaven)
The Omega Corporation, Internal Memorandum. To: Omega, CEO. From: Gamma, Vice President, Hedonic Maximization
Sir, this concerns the newest product of our Hedonic Maximization Department, the Much-Better-Life Simulator. This revolutiona... |
f06c77e5-0973-4ced-b073-98c03d54c2ce | trentmkelly/LessWrong-43k | LessWrong | Open Thread, Jun. 15 - Jun. 21, 2015
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately... |
b31c93f9-8514-47ef-af49-4cb988536c4f | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post3756
A putative new idea for AI control; index here . This post will look at some of the properties of quantilizers , when they succeed and how they might fail. Roughly speaking, let f be some true objective function that we want to maximise. We haven't been able to specify it fully, so we have instead a proxy... |
3d05f859-551c-4419-97f9-fa4026f01101 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla
*Cross-posting a paper from the Google DeepMind mech interp team, by: Tom Lieberum, Matthew Rahtz, János Kramár, Neel Nanda, Geoffrey Irving, Rohin Shah, Vladimir Mikulik*
.
"Darkness Solstice" probably doesn't make sense for most people. But a thing th... |
37c45df2-4c52-4b3a-b343-d1126c5fa6e8 | trentmkelly/LessWrong-43k | LessWrong | Distributed public goods provision
Most people benefit significantly from privately funded public goods (e.g. Wikipedia).
If we all contribute to such public goods, then we can all end up better off. But as an individual it’s almost never a good return on investment. I think of supporting such public goods as being a... |
b17c566c-7ec2-48a6-ac78-d544450fe2cd | trentmkelly/LessWrong-43k | LessWrong | Chicago Meetup: Sunday, August 1 at 2:00 pm
We’re holding the Chicago meetup discussed here on Sunday, August 1, 2010 at 2:00 pm. The tentative location is the Corner Bakery at the corner of State and Cedar (1121 N. State St.), but we’re also happy to move the meetup further up to the North side as has been previously... |
26be218e-0468-4186-8c63-c33d6a842442 | trentmkelly/LessWrong-43k | LessWrong | Twin Peaks: under the air
Content warning: low content
~ Feb 2021
The other day I decided to try imbibing work-relevant blog posts via AI-generated recital, while scaling the Twin Peaks—large hills near my house in San Francisco, of the sort that one lives near and doesn’t get around to going to. It was pretty stran... |
7e9a1510-d057-43fc-b304-1aa1950295e8 | StampyAI/alignment-research-dataset/arxiv | Arxiv | MultiXNet: Multiclass Multistage Multimodal Motion Prediction
I Introduction
---------------
Predicting the future states of other actors such as vehicles, pedestrians, and bicyclists represents a key capability for self-driving technology.
This is a challenging task, and has been found to play an important role in... |
e740e754-3d18-4c03-8a2b-3ce64c1ada24 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence
**Abstract**: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way t... |
f7a6a5b8-06bd-4af0-882c-22a36fd879ff | trentmkelly/LessWrong-43k | LessWrong | An adversarial example for Direct Logit Attribution: memory management in gelu-4l
Please check out our notebook for figure recreation and to examine your own model for clean-up behavior.
Produced as part of ARENA 2.0 and the SERI ML Alignment Theory Scholars Program - Spring 2023 Cohort
Fig 5: Correlation between DL... |
52cd78f7-60c4-452e-a877-101df5c65add | trentmkelly/LessWrong-43k | LessWrong | The Game of Masks
Epistemic Status: Endorsed
Content Warning: Antimemetic Biasing Hazard, Debiasing hazard, Commitment Hazard
Part of the Series: Open Portals
Author: Octavia
0.
So Scott has been talking about Lacan lately and I’ve been honestly pretty impressed by how hard he seemed to bounce off it. Impressed enou... |
d4c86ac4-85ac-431f-a7d1-e4e817f84292 | trentmkelly/LessWrong-43k | LessWrong | Unifying the Simulacra Definitions
Epistemic Status: Confident this is the perspective I find most useful. This is intended to both be a stand-alone post and to be the second post in the Simulacra sequence, with the first being Simulacra Levels and their Interactions. It should be readable on its own, but easier havin... |
d6f3d037-4918-4826-b4f8-cb9be9278fd3 | trentmkelly/LessWrong-43k | LessWrong | The Controls are Lying: A Note on the Memetic Hazards of Video Games [Link]
Chris Pruett writes on the Robot Invader blog:
> Good player handling code is often smoke and mirrors; the player presses buttons and sees a reasonable result, but in between those two operations a whole lot of code is working to ensure that ... |
d61d4048-cb91-42c4-b2fb-080d9438e9ae | trentmkelly/LessWrong-43k | LessWrong | Reinforcement Learning Study Group
Hey everyone,
my name is Kay. I'm new to the forum and I came here with a specific goal in mind:
> I'm putting together a crew for a reinforcement learning study group.
Main objectives:
* Mathematical Foundations: We will work through key passages in Sutton & Barto's Boo... |
293389b2-15de-4d8e-bb55-dc564293b158 | trentmkelly/LessWrong-43k | LessWrong | How long does it take to become Gaussian?
The central limit theorems all say that if you convolve stuff enough, and that stuff is sufficiently nice, the result will be a Gaussian distribution. How much is enough, and how nice is sufficient?
Identically-distributed distributions converge quickly
For many distributio... |
91296907-0bab-49fa-8eb3-00dc74a64bfc | trentmkelly/LessWrong-43k | LessWrong | On oxytocin-sensitive neurons in auditory cortex
(For the big picture of how I wound up on this topic, see Symbol Grounding and Human Social Instincts. But I wound up feeling like oxytocin-sensitive neurons in auditory cortex are NOT an important piece of that particular puzzle.)
(I tried to minimize neuroscience jar... |
2ae9173a-c797-4e29-96dd-4f93c41b5675 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] The Conscious Sorites Paradox
Today's post, The Conscious Sorites Paradox was originally published on 28 April 2008. A summary (taken from the LW wiki):
> Decoherence is implicit in quantum physics, not an extra law on top of it. Asking exactly when "one world" splits into "two worlds" may be like aski... |
5db727b2-0f33-4744-8ffa-6df540530d6b | trentmkelly/LessWrong-43k | LessWrong | Anti-Pascaline satisficer
It occurred to me that the anti-Pascaline agent design could be used as part of a satisficer approach.
The obvious thing to reduce dangerous optimisation pressure is to make a bounded utility function, with an easily achievable bound. Such as giving them a utility linear in paperclips that m... |
71a73cf0-8d0c-45dd-bef6-b72e6170da5e | trentmkelly/LessWrong-43k | LessWrong | Of arguments and wagers
Automatically crossposted from ai-alignment.com
(In which I explore an unusual way of combining the two.)
Suppose that Alice and Bob disagree, and both care about Judy’s opinion. Perhaps Alice wants to convince Judy that raising the minimum wage is a cost-effective way to fight poverty, an... |
2d8219e5-0c82-4042-907b-c0dfb14d99a8 | trentmkelly/LessWrong-43k | LessWrong | Ranked Choice Voting is Arbitrarily Bad
Cross posting from https://applieddivinitystudies.com/2020/09/02/ranked-bad/
Recently, there's been headway in adopting Ranked-Choice Voting, used by several states in the 2020 US Democratic presidential primaries and to be adopted by New York City in 2021.
For all its virtues... |
97c85f9e-9d83-4412-bb69-813f607b8804 | trentmkelly/LessWrong-43k | LessWrong | A voting theory primer for rationalists
What is voting theory?
Voting theory, also called social choice theory, is the study of the design and evaulation of democratic voting methods (that's the activists' word; game theorists call them "voting mechanisms", engineers call them "electoral algorithms", and political sci... |
d0f67043-42ba-4858-80e0-65c382c42d29 | trentmkelly/LessWrong-43k | LessWrong | Meetup : San Diego experimental meetup
Discussion article for the meetup : San Diego experimental meetup
WHEN: 15 January 2012 01:00:00PM (-0800)
WHERE: 6380 Del Cerro Blvd. San Diego, CA 92120
We're having a meetup in our usual haunt on Sunday, January 15th at 1pm. Food and drink are available for purchase, though... |
5edb1391-fe5a-4428-87a3-b50076692114 | trentmkelly/LessWrong-43k | LessWrong | How I Think, Part Three: Weighing Cryonics
These are things I've been thinking about when trying to decide if I should get cryonics. I've sometimes gotten the sense that some rationalists think this is a no-brainer. But since it's a $100k+ decision, I think it deserves quite a bit of thinking. So here is my list of th... |
a9aba7ff-3d8b-4e9e-b741-34823b0d2387 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Buffalo Meetup
Discussion article for the meetup : Buffalo Meetup
WHEN: 17 February 2013 04:00:00PM (-0500)
WHERE: SPOT Coffee Delaware Ave & W Chippewa St, Buffalo, NY
(Apologies, for the short notice.) Last meetup we talked about making sure your beliefs "pay rent " by constraining anticipation. This tim... |
97f71bd2-b9dd-48d1-a2a9-a5833e4c9489 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | [Linkpost] Given Extinction Worries, Why Don’t AI Researchers Quit? Well, Several Reasons
I've written a blog post for a lay audience, explaining some of the reasons that AI researchers who are concerned about extinction risk have for continuing to work on AI research, despite their worries
The apparent contradiction... |
6dce6b88-2e01-4d8b-a38d-50f8ff633476 | trentmkelly/LessWrong-43k | LessWrong | AI Safety field-building projects I'd like to see
People sometimes ask me what types of AIS field-building projects I would like to see.
Here’s a list of 11 projects.
Background points/caveats
But first, a few background points.
1. These projects require people with specific skills/abilities/context in order for ... |
212be483-ff96-4408-a3ee-c235dd9072da | trentmkelly/LessWrong-43k | LessWrong | Meetup : West LA Meetup - The Resistance (Game)
Discussion article for the meetup : West LA Meetup - The Resistance (Game)
WHEN: 11 April 2012 07:00:00PM (-0700)
WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064
When: 7:00pm - 9:00pm Wednesday, April 11th.
Where: The Westside Tavern in the upstairs Wine Bar (all ... |
aa1d9717-5e71-43ce-8763-9c328e2ff29e | trentmkelly/LessWrong-43k | LessWrong | Alignment is hard. Communicating that, might be harder
Crossposted from the EA Forum: https://forum.effectivealtruism.org/posts/PWKWEFJMpHzFC6Qvu/alignment-is-hard-communicating-that-might-be-harder
Note: this is my attempt to articulate why I think it's so difficult to discuss issues concerning AI safety with n... |
39573a2e-866f-41a5-97c5-62a5839dab60 | trentmkelly/LessWrong-43k | LessWrong | SAE feature geometry is outside the superposition hypothesis
Written at Apollo Research
Summary: Superposition-based interpretations of neural network activation spaces are incomplete. The specific locations of feature vectors contain crucial structural information beyond superposition, as seen in circular arrangemen... |
1319e381-4e0b-4eec-96de-789a44a7ea8a | trentmkelly/LessWrong-43k | LessWrong | Implementing CDT with optimal predictor systems
We consider transparent games between bounded CDT agents ("transparent" meaning each player has a model of the other players). The agents compute the expected utility of each possible action by executing an optimal predictor of a causal counterfactual, i.e. an optimal pr... |
0fcc774d-677a-45b1-ab6f-c39008cc106b | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Wittgenstein and ML — parameters vs architecture
Status: a brief distillation of Wittgenstein's book *On Certainty*, using examples from deep learning and GOFAI, plus discussion of AI alignment and interpretability.
---
> "That is to say, the questions that we raise and our doubts depend on the fact that some pro... |
613ff47b-d96e-4b1d-871b-aff351fbf7e1 | StampyAI/alignment-research-dataset/arxiv | Arxiv | AI Safety and Reproducibility: Establishing Robust Foundations for the Neuropsychology of Human Values
I Anthropomorphic Design of Superintelligent AI Systems
--------------------------------------------------------
There has been considerable discussion in recent years about the consequences of achieving human-lev... |
be6c159f-a60b-4b30-8b6a-16c3ec16fa3a | StampyAI/alignment-research-dataset/blogs | Blogs | Are AI surveys seeing the inside view?
*By Katja Grace, 15 January 2015*
An interesting thing about the [survey data](http://aiimpacts.wpengine.com/ai-timeline-surveys/ "AI Timeline Surveys") on timelines to human-level AI is the apparent incongruity between answers to ‘[when](http://aiimpacts.wpengine.com/muller-an... |
df909809-75e2-4555-912d-74f9921c5f31 | StampyAI/alignment-research-dataset/arxiv | Arxiv | A Formal Approach to the Problem of Logical Non-Omniscience
1 Introduction
---------------
Every student of mathematics has experienced uncertainty about conjectures for which there is “quite a bit of evidence”, such as the Riemann hypothesis or the twin prime conjecture. Indeed, when Zhang [[52](#bib.bib52)] prove... |
9b53361c-2968-4a16-9778-e5e4f68babc8 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Safety Implications of LeCun's path to machine intelligence
Yann LeCun recently posted [A Path Towards Autonomous Machine Intelligence](https://openreview.net/forum?id=BZ5a1r-kVsf), a high-level description of the architecture he considers most promising to advance AI capabilities.
This post summarizes the architect... |
d12cf410-6589-405f-9994-70a21f48032e | trentmkelly/LessWrong-43k | LessWrong | Progress links and tweets, 2023-03-08
The Progress Forum
* Derek Thompson interviews Patrick Collison on progress
Opportunities
* Essay contest: “How could science be different?” (via @moreisdifferent)
Marc Andreessen is blogging again
* “What’s my hope? To show you that we live in a more interesting world than... |
43b722a3-fbbf-47c4-83e7-e6bb2da8d876 | trentmkelly/LessWrong-43k | LessWrong | Artificial explosion of the Sun: a new x-risk?
Bolonkin & Friedlander (2013) argues that it might be possible for "a dying dictator" to blow up the Sun, and thus destroy all life on Earth:
> The Sun contains ~74% hydrogen by weight. The isotope hydrogen-1 (99.985% of hydrogen in nature) is a usable fuel for fusion th... |
a842f82a-2c78-4aca-a4df-cf5f16f74ee5 | trentmkelly/LessWrong-43k | LessWrong | GPT-7: The Tale of the Big Computer (An Experimental Story)
In the not-too-distant future, a remarkable transformation took place. The world had seen the rise and fall of many technologies, but none as impactful as the data processing machines. These machines, born from the marriage of silicon and code, were not just ... |
82b2b74d-5c1f-42b8-b1a5-6e166ba80c8e | trentmkelly/LessWrong-43k | LessWrong | Draft papers for REALab and Decoupled Approval on tampering
Hi everyone, we (Ramana Kumar, Jonathan Uesato, Victoria Krakovna, Tom Everitt, and Richard Ngo) have been working on a strand of work researching tampering problems, and we've written up our progress in two papers. We're sharing drafts in advance here becaus... |
d327a72f-a4e5-40ee-85f9-c106eb020477 | trentmkelly/LessWrong-43k | LessWrong | MDP models are determined by the agent architecture and the environmental dynamics
Seeking Power is Often Robustly Instrumental in MDPs relates the structure of the agent's environment (the 'Markov decision process (MDP) model') to the tendencies of optimal policies for different reward functions in that environment (... |
c8108634-890b-43f4-a974-5016d8fba722 | trentmkelly/LessWrong-43k | LessWrong | European Community Weekend 2018 Announcement
We are excited to announce this year's European LessWrong Community Weekend. For the fifth time, rationalists from all over Europe (and some from outside Europe) are gathering in Berlin to socialize, have fun, exchange knowledge and skills, and have interesting discussions.... |
6a99c5ae-acb2-4967-ba8a-d5b3a811c87d | trentmkelly/LessWrong-43k | LessWrong | Rational Humanist Music
Edit: Since posting this, I've gone on to found a rationalist singalong holiday and get an album produced, available at humanistculture.bandcamp.com
Something that's bothered me a lot lately is a lack of good music that evokes the kind of emotion that spiritually-inspired music does, but whose... |
14cfcbb0-f9ee-4d87-b171-e2242b240326 | trentmkelly/LessWrong-43k | LessWrong | In Defense of the Fundamental Attribution Error
The Fundamental Attribution Error
Also known, more accurately, as "Correspondence Bias."
http://lesswrong.com/lw/hz/correspondence_bias/
The "more accurately" part is pretty important; bias -may- result in error, but need not -necessarily- do so, and in some cases may ... |
8e680fca-9062-44e7-91e4-978dddc821a8 | trentmkelly/LessWrong-43k | LessWrong | An extension of Aumann's approach for reducing game theory to bayesian decision theory to include EDT and UDT like agents
Aumann in Correlated Equilibrium as an Expression of Bayesian Rationality developed a formalism to reduce nashian game theory to bayesian decision making in a multi-agent setting and proved within ... |
f955ee7c-0062-4560-9372-5982f5dd23ef | StampyAI/alignment-research-dataset/blogs | Blogs | Are we "trending toward" transformative AI? (How would we know?)
*Audio also available by searching Stitcher, Spotify, Google Podcasts, etc. for "Cold Takes Audio"*
[Figure: flowchart from the post, "Today's world" leading to "Transformative AI", then "Digital people", branching to a world of misaligned AI, a world run by something else, or a stable, galaxy-wide civil...] |
d27dfae2-7db9-4045-803f-3790e77dd3fa | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Is the orthogonality thesis at odds with moral realism?
Continuing [my](/r/discussion/lw/iwy/why_didnt_people_apparently_understand_the/) [quest](/lw/iza/no_universally_compelling_arguments_in_math_or/) to untangle people's confusions about Eliezer's metaethics... I've started to wonder if maybe some people have the i... |
08b9bcb7-7f28-4162-b7e8-57252012510b | trentmkelly/LessWrong-43k | LessWrong | Charter Cities: why they're exciting & how they might work
Hello! What follows is a work-in-progress script about the idea of Charter Cities, which the EA-adjacent youtube channel RationalAnimations plans to animate soon. I want to make sure I'm presenting the idea of charter cities properly and in a compelling, unde... |
d4e3cfb1-c980-40bd-8169-029f8bd5604b | trentmkelly/LessWrong-43k | LessWrong | How not to be a Naïve Computationalist
Meta-Proposal of which this entry is a subset:
The Shortcut Reading Series is a series of less wrong posts that should say what are the minimal readings, as opposed to the normal curriculum, that one ought to read to grasp most of the state of the art conceptions of humans about... |
9e7360ed-57b7-4509-9910-ece4d0eab005 | StampyAI/alignment-research-dataset/blogs | Blogs | New funding for AI Impacts
*By Katja Grace, 4 July 2015*
AI Impacts has received two grants! We are grateful to the [Future of Humanity Institute](http://fhi.ox.ac.uk) (FHI) for $8,700 to support work on the project until September 2015, and the [Future of Life Institute](http://futureoflife.org) (FLI) for $49,310 f... |
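For readers who want to work with rows shaped like the table above, here is a minimal sketch using the Hugging Face `datasets` library. It assumes the table corresponds to a dataset hosted on the Hub with the four columns shown (`id`, `source`, `formatted_source`, `text`); the repository id in the snippet is a placeholder, not a confirmed dataset name.

```python
# Minimal sketch: load and inspect rows with the schema shown above.
# Assumed columns: id, source, formatted_source, text.
# "your-org/your-alignment-text-dataset" is a placeholder repository id.
from datasets import load_dataset

ds = load_dataset("your-org/your-alignment-text-dataset", split="train")

# Print a one-line preview of the first few rows, mirroring the table layout.
for row in ds.select(range(3)):
    preview = row["text"][:120].replace("\n", " ")
    print(f'{row["id"]} | {row["source"]} | {row["formatted_source"]} | {preview}...')
```

Since the `text` column here ranges up to several megabytes per row, truncating to a short preview (as above) keeps inspection cheap; streaming mode (`load_dataset(..., streaming=True)`) is another option if the full dataset is too large to download.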