Hover cursor over text of **this color** to see a popup reference for the symbol or name. Click elsewhere to make the popup disappear.

• Expand and collapse sections of the argument using the expand/collapse buttons.

• The axioms used to prove a given lemma (including the main Goal) are listed before the lemma or as children of the lemma. The ones listed before are those that define or introduce symbols that are used in the statement of the lemma (i.e. the ones that are required for understanding the lemma).

• An **Assertion** is an axiom that I expect to be uncontroversial *relative* to the axioms designated **Assumption**. It is a vague and subjective designation. If the language interpretation guide entries (the informal textual semantics) for the symbols that the axiom depends on do not make it clear to you that this axiom should be true, then you should criticize this designation. Depending on the specifics of your criticism, I can respond by improving the entries in the language interpretation guide for some of those symbols, or else by changing the designation to "Assumption" or "Claim".

• A **Definitional Axiom** is an axiom that should be *absolutely* uncontroversial among the intended audience for this argument. Usually, if not always, it will have a very objective/mathematical character.

• **✓Lemma** and **✓Goal** denote sentences that have been formally proved (computer-checkable) to be logical consequences of the definitions, axioms and lemmas that precede them, and the definitions, axioms and lemmas that are their children (immediate descendants only). A **Lemma** has an informal proof. A **Claim** is a statement that is used as a Lemma but has been neither formally nor informally proved.

• A **Defn~** introduces a symbol together with a sentence that intuitively defines it, but does not technically yield a definitional extension of the theory. Usually such symbols could be turned into proper definitions with the use of a definite description operator.


Throughout this argument, S, NS, AS, and PAS abbreviate 'suicide', 'no suicide', 'assisted suicide', and 'physician-assisted suicide', respectively.

This is an argument for favouring the introduction of legislation that founds an administrative system (here broadly-defined) for granting Canadians access to physician-assisted suicide in limited cases. Before getting into the specifics, I want to discuss some important differences between this argument and (purely) natural language, non-deductive arguments also in favour of physician-assisted suicide (PAS).

The conclusion of this argument is weaker than the conclusions of the non-deductive arguments. In particular, the advocated (underspecified) PAS system is under-inclusive: it would not grant access to all people who I (or almost any proponent of physician-assisted suicide) think should have access. I am prepared to argue, informally, that this (and by "this" I mean under-inclusiveness in general, not the particular kind or degree of under-inclusiveness that the system involved here has) is a necessary feature of any sufficiently-rigorous argument that does not make stronger assumptions than this one. More specifically, for any argument

- whose assumptions do not preclude the possibility of *regrettable* uses of physician-assisted suicide (see ‹regrettable PAS›),
- that does not make strong assumptions along the lines of "regrettable assisted suicides aren't that bad", and
- whose assumptions do not guarantee that the advocated administrative system for PAS will have access to additional information about applicants,

there will be applicants who *should*, in a subjective moral sense according to proponents like me, have access to PAS, but who are indistinguishable from (hypothetical) applicants whose assisted suicide would be regrettable.

Even so, one could argue that this system is *especially* or *unnecessarily* under-inclusive, and that I won't dispute. This argument should be taken as a proof-of-concept. A more-serious attempt would have to involve much more research about this particular issue (e.g. data from the history of the systems in places like the Netherlands and Oregon) than I can afford to do for my thesis. A recent source of such information and much more is Quebec's Committee report on dying with dignity.

At this point I could enumerate the ways that existing non-deductive/natural language arguments use stronger assumptions than those employed here. But here is a method that you can use to collect most of those ways yourself: for each Assumption and Simplifying Assumption, consider how one could use natural language to hide the details of the problem that the assumption addresses. For example, I will spend a surprising amount of time delineating the things that I consider possibly-pertinent to the decision of whether or not to support the legislation. I have never seen this done in such a clear and explicit way in informal arguments; often the best one can hope for is that it can be unambiguously gleaned from reading nearly the *entire* text of the argument (which is book-length in some cases, e.g. the committee report linked to in the previous paragraph).
Another example is the unusually-specific weighing of the (sometimes theoretical) pros and cons of such legislation; in this argument I go a level deeper than any natural language argument I have seen in the justification for the main required subjective proposition ✓Lemma 74.

There may already exist a natural language argument about PAS that is every bit as fair and disciplined as the one I give here (the best *concise* argument I've read was written by the supreme court justices in the minority opinion for the Sue Rodriguez case). A similar thing can be said for any proposed prescription or standard: there are always actors within the scope of the prescription who have come to comply with it on their own. In the case of maximally fair/disciplined argumentation, as in the case of prescriptions for limiting pollution, or the protection of human rights, etc., those actors should be applauded, but their existence should not detract from the importance of the prescription unless they are sufficiently common (where "sufficiently common" is relative: we would need them to be very common in the case of pollution, or universal in the case of human rights).

First, a person must opt in before being diagnosed with their terminal illness. The purpose of this feature is to lower the (already low) incidence of some forms of *difficult instances of regrettable assisted suicides* (‹difficult-regrettable PAS›), which are regrettable assisted suicides in which

- The applicant is correctly diagnosed; or the applicant is incorrectly diagnosed, but nonetheless suffers from an illness that is terminal without treatment, and all methods of diagnosis have been exhausted (see ‹PAS with wrong and substandard diagnosis›).
- The person who uses the lethal drugs is the person who applied for them (see ‹PAS of wrong person›).
- The applicant uses the drugs with the intention of dying.
- The applicant was not actively coerced into applying for and/or taking the drugs (see ‹coerced suicide›).

Second, a person must take the drugs in the presence of their physician. This is not ideal, and is not required in Oregon, for example, but it is useful for justifying the assumption that with very high probability, the negations of items 2 and 3 above never happen.

Third, there is a fixed list of eligible terminal diseases, which includes only those for which it can be established with "high" probability that a person has the disease and will "very likely" not live for more than X months. Each of "high", "very likely" and X can differ for different patients; the first two only need to be high enough, and the third small enough, that the patient is unwilling to try to beat the doctors' reported odds. Interesting note: ALS, the condition that Sue Rodriguez and Gloria Taylor suffered from, is regarded as having no definitive diagnosis. It is always diagnosed in part by excluding other conditions. It can be falsely diagnosed, for example, when the real disease is myasthenia gravis, which is treatable.

Fourth, in some parts of Canada, prosecution for illegal assisted suicide is rare relative to its incidence. This must remain the case even after a PAS law is passed. This is to justify the assumption that ‹S, NS because of incr fear of prosecution› happens to no one with high probability (in Assumption 2). In particular, a person answering a request to assist another person in dying, where the second person is ineligible for legal PAS, should not be at, or perceive themselves to be at, *increased* risk of prosecution, compared to the risk before the law is passed.

Fifth, to justify the assumption that ‹NS, NS & worse palliative care› happens to no one with high probability (in Assumption 2), the government will monitor private donations towards palliative care and research on palliative care, and if the appropriately-adjusted funds per person drops below what it was before the PAS law was passed, the government will increase public funding to compensate.

Sixth, to support all of Assumption 2 through Assumption 6, since all can be broken by incompetence, the government will adopt one of the provisions from Quebec's bill 52, which they summarize as follows:

A commission on end-of-life care is established under the name “Commission sur les soins de fin de vie”, as well as rules with respect to its composition and operations. The mandate of the Commission is to examine all matters relating to end-of-life care and to oversee the application of specific requirements relating to medical aid in dying.

The problem is that

- extremely unlikely Δ's don't happen to anyone.
- there are slightly-pessimistic counts (for proponent of PAS), relative to the expected values determined by 𝒟, for Δ's that are favourable for ASF.
- there are slightly-optimistic counts (for opponents of PAS), relative to the expected values determined by 𝒟, for Δ's that are favourable for SQF.

- Semantically define a number of IFR's (i.e. introduce them as primitive symbols, with language interpretation guide entries).
- State uncontroversial axioms about relations between the IFR's.
- Syntactically define a number of Δ's in terms of IFR's, and semantically define a few more.
- Derive some statements relating the Δ's (using relations between IFR's), and assume some others as uncontroversial axioms.
- Define the pair of typical futures $\vec{F}_{\text{typical}}$ = ‹SQF, ASF› as discussed above.
- Syntactically define sets of people P₁,…,Pₙ in terms of satisfying various Δ's with respect to $\vec{F}_{\text{typical}}$.
- Derive some subset, disjointness and emptiness relations between the Pᵢs (from relations between the Δ's). Also make assumptions, some of them controversial, about relations between some of the other Pᵢs.
- Make subjective, controversial assumptions of this form: "The change in moving from SQF to ASF in the quality-of-life for the people in Pᵢ (considered as a group- not individually) is worse/better/approximately the same as the change for the people in Pⱼ."
- State easily-agreed-upon axioms that suffice to derive, from the other axioms and lemmas, that the change in moving from SQF to ASF in the quality of life for the set of ‹all people› is positive overall.
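As a toy, purely hypothetical illustration of the plan above (the names and data are mine, not part of the argument), IFRs can be modeled as finite relations and the derived people-sets checked mechanically:

```python
from itertools import product

PEOPLE = {"alice", "bob", "carol"}
FUTURES = {"SQF", "ASF"}
UNIVERSE = set(product(PEOPLE, FUTURES))

# An IFR is modeled as a set of (person, future) pairs;
# RelApp_IFR(phi, p, F) is then just membership testing.
S = {("alice", "ASF")}   # toy relation: alice's death is a suicide in ASF only
NS = UNIVERSE - S        # the complement IFR, as in co(S)

def people_in_ifr(phi, F):
    """peopleIn_IFR(phi, F): the set of people p with phi(p, F)."""
    return {p for (p, f) in phi if f == F}

# Sanity check: for each future, peopleIn(S, F) and peopleIn(NS, F)
# partition PEOPLE, mirroring the disjointness/union relations the
# argument derives between its P_i sets.
for F in FUTURES:
    a, b = people_in_ifr(S, F), people_in_ifr(NS, F)
    assert a | b == PEOPLE and not (a & b)
```

The multisets of Δ's and the subjective comparison ≻_{Δ} are deliberately not modeled here; they are the genuinely subjective ingredients of the argument.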

Variables φ, φ₁, φ₂, φ₃, ψ, ψ₁, ψ₂ are reserved for sort IFR.

Variables δ, δ₁, δ₂, δ₃, δ₄ are reserved for sort Δ.

Variable p is reserved for sort People.

Variables F, F₁, F₂, F₃, F₄ are reserved for sort ◊Future.

Variables ${\vec{F}}$, ${\vec{F}_{1}}$, ${\vec{F}_{2}}$ are reserved for sort ◊FuturePair.

Variables E, E₁, E₂ are reserved for sort ◊FuturePairEC.

Variables P, Q, P₁, P₂, P₃, P₄ are reserved for sort FinSet(People).

Variables Y, Y₁, Y₂, Y₃ are reserved for sort MultiSet(Δ).

Sort op FinSet - Finite subsets of the given sort.

Sort op MultiSet - Finite multi-sets (unordered lists) of the given sort.

Sort op Distr

If the (interpretation of the) given sort S is finite, then it is the set of probability distributions over S. When S is not finite, you may define Distr(S) however you like (e.g. arbitrarily make it a singleton set whose member is not in the interpretation of any other sort symbol.)

Sort ℝ^{≥0} - Nonnegative reals

Sort ℕ - Natural numbers

ceil, floor : ℝ^{≥0} → ℕ - Ceiling and floor functions

< : ℝ^{≥0} × ℝ^{≥0} → 𝔹

Defn ≥ : ℝ^{≥0} × ℝ^{≥0} → 𝔹 - (x ≥ y) ⇔ ¬(x < y)

Defn > : ℝ^{≥0} × ℝ^{≥0} → 𝔹 - (x > y) ⇔ (y < x)
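A minimal executable reading of these two derived orders, assuming < is the standard strict total order on the nonnegative reals:

```python
def ge(x, y):
    """x >= y iff not (x < y), with < the usual strict order."""
    return not (x < y)

def gt(x, y):
    """x > y iff y < x."""
    return y < x

assert ge(2.0, 2.0) and ge(3.0, 2.0) and not ge(1.0, 2.0)
assert gt(3.0, 2.0) and not gt(2.0, 2.0)
```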

Sort People

The set of residents of Canada. For concreteness, let's say the residents who are alive sometime during 1993 (the year of Sue Rodriguez's supreme court hearing) or later.

Sort ◊Future

Conceivable possible futures. This set is loosely defined. The main requirement, implicitly imposed by RelApp_{IFR} and RelApp_{Δ}, is that each *element* is defined precisely enough that any IFR relation can be evaluated on it (and similarly for Δ relations).

Sort IFR - **I**ndividual-**F**uture **R**elation. An element is a relation on People × ◊Future.

Sort ◊FuturePair - This is ◊Future × ◊Future.

Sort Δ

Individual-future-difference relation. An element is just a relation on People × ◊FuturePair = People × ◊Future × ◊Future, which is accessed via RelApp_{Δ}.

Sort ◊FuturePairEC

Equivalence classes of ◊Future pairs. Effectively defined by ec and rep.

Sort FinSet(People)

Sort MultiSet(Δ)

Sort Distr(◊FuturePairEC)

‹should pass› : → 𝔹

True if you think that legislation should be passed that introduces an administrative system for assisted suicide that is compliant with the description at the top of this argument.

·[1] : ◊FuturePair → ◊Future

·[2] : ◊FuturePair → ◊Future

Defn~ ❬·,·❭ : ◊Future × ◊Future → ◊FuturePair - (∀${\vec{F}}$. ❬${\vec{F}}$[1],${\vec{F}}$[2]❭ = ${\vec{F}}$) ∧ (∀F₁,F₂,F₃,F₄. ❬F₁,F₂❭ = ❬F₃,F₄❭ ⇔ (F₁ = F₃ ∧ F₂ = F₄)) - Pairing operator for ◊Futures.

∈ : People × FinSet(People) → 𝔹 - Set membership for FinSet(People)

Defn ∉ : People × FinSet(People) → 𝔹 - ∀p,P. p ∉ P ⇔ ¬(p ∈ P)

Defn~ ∅ : FinSet(People) - ∀p. p ∉ ∅

RelApp_{IFR} : IFR × People × ◊Future → 𝔹

When φ is an IFR relation, p is from People, and F is from ◊Future, then RelApp_{IFR}(φ, p, F) means φ is true for p in F. The second order syntax φ(p, F) is used to display RelApp_{IFR}(φ, p, F).

RelApp_{Δ} : Δ × People × ◊FuturePair → 𝔹

When δ is a Δ relation, p is from People, and ${\vec{F}}$ is from ◊FuturePair, then RelApp_{Δ}(δ, p, ${\vec{F}}$) means δ is true for p in ${\vec{F}}$. The second order syntax δ(p, ${\vec{F}}$) is used to display RelApp_{Δ}(δ, p, ${\vec{F}}$).

Definitional Axiom 1: ∀P₁,P₂. P₁ = P₂ ⇔ (∀p. p ∈ P₁ ⇔ p ∈ P₂)

Definitional Axiom 2: ∀φ₁,φ₂. φ₁ = φ₂ ⇔ (∀p. ∀F. φ₁(p, F) ⇔ φ₂(p, F))

Definitional Axiom 3: ∀δ₁,δ₂. δ₁ = δ₂ ⇔ (∀p,${\vec{F}}$. δ₁(p, ${\vec{F}}$) ⇔ δ₂(p, ${\vec{F}}$))

Δof : People × ◊FuturePair → Δ

δ = Δof(p,${\vec{F}}$) is the unique most-specific/smallest δ:Δ such that δ(p,${\vec{F}}$). For a given ◊FuturePair ${\vec{F}}$, the differences (that you think are sufficiently-relevant to this debate) between a person p's experiences in the two futures (directed differences: the change *from* the first element of ${\vec{F}}$ *to* the second) are given by Δof(p,${\vec{F}}$).

ΔsOf : FinSet(People) × ◊FuturePair → MultiSet(Δ)

If P is a set of People, then ΔsOf(P,${\vec{F}}$) is the multiset of the same size |P| obtained by applying Δof(·,${\vec{F}}$) to each element of P.
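With Δof modeled as a toy lookup table (hypothetical names and values, for illustration only), ΔsOf is a multiset, for which Python's Counter is a natural stand-in:

```python
from collections import Counter

# Hypothetical assignment of each person's most-specific Delta
# for one fixed future pair.
delta_of = {"alice": "S,NS", "bob": "NS,NS", "carol": "NS,NS"}

def deltas_of(P):
    """ΔsOf(P, F): the multiset of Δof(p, F) over p in P; its size is |P|."""
    return Counter(delta_of[p] for p in P)

Y = deltas_of({"alice", "bob", "carol"})
assert sum(Y.values()) == 3          # |Y| == |P|
assert Y == Counter({"NS,NS": 2, "S,NS": 1})
```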

≻_{Δ} : MultiSet(Δ) × MultiSet(Δ) → 𝔹

A subjective comparison relation. If Y₁ ≻_{Δ} Y₂ then you would rather have the life-experience changes Y₁ happen to |Y₁| random people than have the life-experience changes Y₂ happen to |Y₂| random people. It is a strict partial order.

≈ : MultiSet(Δ) × MultiSet(Δ) → 𝔹

A subjective equivalence relation. If Y₁ ≈ Y₂ then you are impartial or cannot decide whether you would rather have the life-experience changes Y₁ happen to |Y₁| random people or the life-experience changes Y₂ happen to |Y₂| random people.

Defn ≿_{Δ} : MultiSet(Δ) × MultiSet(Δ) → 𝔹 - ∀Y₁,Y₂. Y₁ ≿_{Δ} Y₂ ⇔ (Y₁ ≻_{Δ} Y₂ ∨ Y₁ ≈ Y₂)

Defn~ ec : ◊FuturePair → ◊FuturePairEC - ∀${\vec{F}_{1}}$,${\vec{F}_{2}}$. ec(${\vec{F}_{1}}$) = ec(${\vec{F}_{2}}$) ⇔ (∀p,δ. δ(p, ${\vec{F}_{1}}$) ⇔ δ(p, ${\vec{F}_{2}}$)) - Equivalence class of the given ◊FuturePair. See description of ◊FuturePairEC.

Defn~ rep : ◊FuturePairEC → ◊FuturePair - ∀E. ec(rep(E)) = E

A representative for the given equivalence class. rep(E) can be any ${\vec{F}}$:◊FuturePair such that ec(${\vec{F}}$) = E.

Defn swap : ◊FuturePair → ◊FuturePair - ∀${\vec{F}}$. swap(${\vec{F}}$) = ❬${\vec{F}}$[2],${\vec{F}}$[1]❭

Defn~ pair_{IFR} : IFR × IFR → Δ - ∀p,${\vec{F}}$,φ,ψ. pair_{IFR}(φ, ψ)(p, ${\vec{F}}$) ⇔ (φ(p, ${\vec{F}}$[1]) ∧ ψ(p, ${\vec{F}}$[2]))

The Δ relation obtained in the natural way from two IFR relations; the first (resp. second) IFR relation is applied to the first (resp. second) element of the given ◊FuturePair. Most elements of Δ can be defined in this way. An example of an exception is ‹S, NS & worse palliative care›, because "worse palliative care" depends on both elements of the ◊FuturePair. Such a Δ relation could be defined by a family of IFR-pairs, although that would appear to require quantifying the quality of palliative care, which, if done right (in a way that is satisfactory to everyone), introduces an unnecessary abstract entity into the argument, namely the partially-ordered set of quantities.
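A minimal sketch of the construction (toy data, not part of the argument), with IFRs as sets of (person, future) pairs:

```python
def pair_ifr(phi, psi):
    """pair_IFR(phi, psi) holds for (p, (Fa, Fb)) iff phi(p, Fa) and psi(p, Fb)."""
    return {(p, (fa, fb)) for (p, fa) in phi for (q, fb) in psi if p == q}

# Toy IFRs over one person and two futures, SQF and ASF.
S = {("alice", "SQF")}    # death is a suicide in SQF
NS = {("alice", "ASF")}   # death is not a suicide in ASF

S_NS = pair_ifr(S, NS)    # the Delta <S,NS>
assert S_NS == {("alice", ("SQF", "ASF"))}
```

By contrast, a Δ like ‹S, NS & worse palliative care› cannot be produced by any single such call, since "worse palliative care" compares the two futures jointly rather than componentwise.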

Defn~ peopleIn_{IFR} : IFR × ◊Future → FinSet(People) - ∀φ,F,p. p ∈ peopleIn_{IFR}(φ, F) ⇔ φ(p, F)

Defn~ peopleIn_{Δ} : Δ × ◊FuturePair → FinSet(People) - ∀δ,${\vec{F}}$,p. p ∈ peopleIn_{Δ}(δ, ${\vec{F}}$) ⇔ δ(p, ${\vec{F}}$)

Defn~ 0 : MultiSet(Δ) - ∀${\vec{F}}$. 0 = ΔsOf(∅, ${\vec{F}}$)

Empty multiset that corresponds to a 'neutral' judgement; if ΔsOf(P,${\vec{F}}$) = 0, then for the set of people P, the pair of futures ${\vec{F}}$ are, overall, of equal value with respect to the Δ relations in use.

Defn~ ‹true› : IFR - ∀F. ∀p. ‹true›(p, F) - Trivial IFR such that ‹true›(p,F) for all p,F.

Defn~ ⋂ : IFR × IFR → IFR - ∀φ₁,φ₂,φ. φ₁ ⋂ φ₂ = φ ⇔ (∀p,F. φ(p, F) ⇔ (φ₁(p, F) ∧ φ₂(p, F)))

Defn~ ⋂ : Δ × Δ → Δ - ∀δ₁,δ₂,δ₃:Δ. δ₁ ⋂ δ₂ = δ₃ ⇔ (∀p,${\vec{F}}$. δ₃(p, ${\vec{F}}$) ⇔ (δ₁(p, ${\vec{F}}$) ∧ δ₂(p, ${\vec{F}}$)))

Defn~ ⋃ : IFR × IFR → IFR - ∀φ₁,φ₂,φ. φ₁ ⋃ φ₂ = φ ⇔ (∀p,F. φ(p, F) ⇔ (φ₁(p, F) ∨ φ₂(p, F)))

Defn~ ⋃ : Δ × Δ → Δ - ∀δ₁,δ₂,δ₃. δ₁ ⋃ δ₂ = δ₃ ⇔ (∀p,${\vec{F}}$. δ₃(p, ${\vec{F}}$) ⇔ (δ₁(p, ${\vec{F}}$) ∨ δ₂(p, ${\vec{F}}$)))

Defn~ ⋃ : FinSet(People) × FinSet(People) → FinSet(People) - ∀P,P₁,P₂. P = P₁ ⋃ P₂ ⇔ (∀p. p ∈ P ⇔ (p ∈ P₁ ∨ p ∈ P₂))

Defn~ \ : IFR × IFR → IFR - ∀φ₁,φ₂,φ. φ₁ \ φ₂ = φ ⇔ (∀p,F. φ(p, F) ⇔ (φ₁(p, F) ∧ ¬φ₂(p, F)))

Defn~ \ : Δ × Δ → Δ - ∀δ₁,δ₂,δ₃. δ₁ \ δ₂ = δ₃ ⇔ (∀p,${\vec{F}}$. δ₃(p, ${\vec{F}}$) ⇔ (δ₁(p, ${\vec{F}}$) ∧ ¬δ₂(p, ${\vec{F}}$)))

Defn~ \ : FinSet(People) × FinSet(People) → FinSet(People) - ∀P,P₁,P₂. P = P₁ \ P₂ ⇔ (∀p. p ∈ P ⇔ (p ∈ P₁ ∧ p ∉ P₂))

Defn~ co : IFR → IFR - ∀φ₁,φ₂. co(φ₁) = φ₂ ⇔ (∀p,F. φ₂(p, F) ⇔ ¬φ₁(p, F))

Defn Disjoint : IFR × IFR → 𝔹 - ∀φ,ψ. Disjoint(φ, ψ) ⇔ (∀p,F. ¬φ(p, F) ∨ ¬ψ(p, F))

Defn Disjoint : Δ × Δ → 𝔹 - ∀δ₁,δ₂. Disjoint(δ₁, δ₂) ⇔ (∀p,${\vec{F}}$. ¬δ₁(p, ${\vec{F}}$) ∨ ¬δ₂(p, ${\vec{F}}$))

Defn Disjoint : FinSet(People) × FinSet(People) → 𝔹 - ∀P₁,P₂. Disjoint(P₁, P₂) ⇔ (∀p. p ∉ P₁ ∨ p ∉ P₂)

Defn Disjoint₃ : FinSet(People) × FinSet(People) × FinSet(People) → 𝔹 - ∀P₁,P₂,P₃. Disjoint₃(P₁, P₂, P₃) ⇔ (Disjoint(P₁, P₂) ∧ Disjoint(P₁, P₃) ∧ Disjoint(P₂, P₃))

Defn IFRDisjoint₃ : IFR × IFR × IFR → 𝔹 - ∀φ₁,φ₂,φ₃. IFRDisjoint₃(φ₁, φ₂, φ₃) ⇔ (Disjoint(φ₁, φ₂) ∧ Disjoint(φ₁, φ₃) ∧ Disjoint(φ₂, φ₃))

Defn Disjoint₄ : FinSet(People) × FinSet(People) × FinSet(People) × FinSet(People) → 𝔹 - ∀P₁,P₂,P₃,P₄. Disjoint₄(P₁, P₂, P₃, P₄) ⇔ (Disjoint(P₁, P₂) ∧ Disjoint(P₁, P₃) ∧ Disjoint(P₁, P₄) ∧ Disjoint(P₂, P₃) ∧ Disjoint(P₂, P₄) ∧ Disjoint(P₃, P₄))

Defn Disjoint₄ : Δ × Δ × Δ × Δ → 𝔹 - ∀δ₁,δ₂,δ₃,δ₄. Disjoint₄(δ₁, δ₂, δ₃, δ₄) ⇔ (Disjoint(δ₁, δ₂) ∧ Disjoint(δ₁, δ₃) ∧ Disjoint(δ₁, δ₄) ∧ Disjoint(δ₂, δ₃) ∧ Disjoint(δ₂, δ₄) ∧ Disjoint(δ₃, δ₄))
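Treating people-sets as Python sets, the n-ary disjointness predicates are just pairwise checks over all unordered pairs; a sketch:

```python
from itertools import combinations

def disjoint(A, B):
    """Disjoint(A, B): no common element."""
    return not (A & B)

def disjoint_n(*sets):
    """Pairwise disjointness, generalizing Disjoint_3 and Disjoint_4."""
    return all(disjoint(A, B) for A, B in combinations(sets, 2))

assert disjoint_n({1}, {2}, {3}, {4})
assert not disjoint_n({1}, {1, 2}, {3})
```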

Defn ⊆ : IFR × IFR → 𝔹 - ∀φ,ψ. φ ⊆ ψ ⇔ (∀p,F. φ(p, F) ⇒ ψ(p, F))

Defn ⊆ : Δ × Δ → 𝔹 - ∀δ₁,δ₂. δ₁ ⊆ δ₂ ⇔ (∀p,${\vec{F}}$. δ₁(p, ${\vec{F}}$) ⇒ δ₂(p, ${\vec{F}}$))

Defn ⊆ : FinSet(People) × FinSet(People) → 𝔹 - ∀P₁,P₂. P₁ ⊆ P₂ ⇔ (∀p. p ∈ P₁ ⇒ p ∈ P₂)

Defn DisjointUnion : Δ × Δ × Δ → 𝔹 - ∀δ,δ₁,δ₂. DisjointUnion(δ, δ₁, δ₂) ⇔ (δ = δ₁ ⋃ δ₂ ∧ Disjoint(δ₁, δ₂))

Defn DisjointUnion : IFR × IFR × IFR → 𝔹 - ∀φ,φ₁,φ₂. DisjointUnion(φ, φ₁, φ₂) ⇔ (φ = φ₁ ⋃ φ₂ ∧ Disjoint(φ₁, φ₂))

Defn DisjointUnion₃ : IFR × IFR × IFR × IFR → 𝔹 - ∀φ,φ₁,φ₂,φ₃. DisjointUnion₃(φ, φ₁, φ₂, φ₃) ⇔ (φ = (φ₁ ⋃ φ₂) ⋃ φ₃ ∧ IFRDisjoint₃(φ₁, φ₂, φ₃))

✓Lemma 1: ∀φ,ψ. φ ⋂ ψ = ψ ⋂ φ

✓Lemma 2: ∀φ₁,φ₂,φ. φ₁ ⊆ φ ∧ φ₂ ⊆ φ ⇒ φ₁ ⋃ φ₂ ⊆ φ

✓Lemma 3: ∀φ,ψ. φ ⊆ ψ ⇒ co(ψ) ⊆ co(φ)

✓Lemma 4: ∀φ₁,φ₂,ψ. φ₁ ⊆ φ₂ ⇒ ψ \ φ₂ ⊆ ψ \ φ₁

✓Lemma 5: ∀φ,ψ. φ \ ψ ⊆ φ

✓Lemma 6: ∀φ₁,φ₂. φ₁ ⋂ φ₂ ⊆ φ₁ ∧ φ₁ ⋂ φ₂ ⊆ φ₂

✓Lemma 7: ∀φ,ψ. φ = ψ ⇔ (φ ⊆ ψ ∧ ψ ⊆ φ)

✓Lemma 8: ∀φ,ψ₁,ψ₂. φ ⋂ (ψ₁ ⋃ ψ₂) = (φ ⋂ ψ₁) ⋃ (φ ⋂ ψ₂)

✓Lemma 9: ∀φ,ψ₁,ψ₂. φ ⋂ (ψ₁ ⋃ ψ₂) ⊆ (φ ⋂ ψ₁) ⋃ (φ ⋂ ψ₂)

✓Lemma 10: ∀φ,ψ₁,ψ₂. (φ ⋂ ψ₁) ⋃ (φ ⋂ ψ₂) ⊆ φ ⋂ (ψ₁ ⋃ ψ₂)

✓Lemma 11: ∀φ,ψ₁,ψ₂. φ ⋂ ψ₁ ⊆ φ ⋂ (ψ₁ ⋃ ψ₂)

✓Lemma 12: ∀φ,ψ₁,ψ₂. φ ⋂ ψ₂ ⊆ φ ⋂ (ψ₁ ⋃ ψ₂)

✓Lemma 13: ∀φ,ψ,ψ₁,ψ₂. DisjointUnion(ψ, ψ₁, ψ₂) ⇒ DisjointUnion(ψ ⋂ φ, ψ₁ ⋂ φ, ψ₂ ⋂ φ)

✓Lemma 14: ∀φ,ψ,ψ₁,ψ₂. ψ = ψ₁ ⋃ ψ₂ ⇒ ψ ⋂ φ = (ψ₁ ⋂ φ) ⋃ (ψ₂ ⋂ φ)

✓Lemma 15: ∀φ,ψ₁,ψ₂. Disjoint(ψ₁, ψ₂) ⇒ Disjoint(ψ₁ ⋂ φ, ψ₂ ⋂ φ)

✓Lemma 16: ∀δ₁,δ₂. δ₁ ⋂ δ₂ = δ₂ ⋂ δ₁

✓Lemma 17: ∀δ₁,δ₂. δ₁ = δ₂ ⇔ (δ₁ ⊆ δ₂ ∧ δ₂ ⊆ δ₁)

✓Lemma 18: ∀δ₁,δ₂. δ₁ \ δ₂ ⊆ δ₁

✓Lemma 19: ∀δ₁,δ₂,δ₃,δ₄. δ₁ ⊆ δ₂ ∧ δ₃ ⊆ δ₄ ∧ Disjoint(δ₂, δ₄) ⇒ Disjoint(δ₁, δ₃)

✓Lemma 20: ∀P₁,P₂. P₁ = P₂ ⇔ (P₁ ⊆ P₂ ∧ P₂ ⊆ P₁)

✓Lemma 21: ∀P. (∀p. p ∉ P) ⇒ P = ∅

✓Lemma 22: ∀P. P ⋃ ∅ = P

✓Lemma 23: ∀P₁,P₂. P₁ ⋃ P₂ = P₂ ⋃ P₁

✓Lemma 24: ∀P₁,P₂. P₁ \ P₂ ⊆ P₁

✓Lemma 25: ∀P₁,P₂. Disjoint(P₁, P₂) ⇔ Disjoint(P₂, P₁)

✓Lemma 26: ∀P₁,P₂,P. Disjoint(P₁, P) ∧ Disjoint(P₂, P) ⇒ Disjoint(P₁ ⋃ P₂, P)

Lemma 1: ∀P,P₁,P₂,P₃,P₄,Q. P ⊆ ((P₁ ⋃ P₂) ⋃ P₃) ⋃ P₄ ∧ P₁ ⊆ Q ∧ P₂ ⊆ Q ∧ P₃ ⊆ Q ∧ P₄ ⊆ Q ⇒ P ⊆ Q

✓Lemma 27: ∀φ,ψ₁,ψ₂. pair_{IFR}(ψ₁ ⋃ ψ₂, φ) = pair_{IFR}(ψ₁, φ) ⋃ pair_{IFR}(ψ₂, φ)

✓Lemma 28: ∀φ,ψ₁,ψ₂. pair_{IFR}(ψ₁ ⋃ ψ₂, φ) ⊆ pair_{IFR}(ψ₁, φ) ⋃ pair_{IFR}(ψ₂, φ)

✓Lemma 29: ∀φ,ψ₁,ψ₂. pair_{IFR}(ψ₁, φ) ⋃ pair_{IFR}(ψ₂, φ) ⊆ pair_{IFR}(ψ₁ ⋃ ψ₂, φ)

✓Lemma 30: ∀φ₁,φ₂,ψ₁,ψ₂. Disjoint(φ₁, φ₂) ⇒ Disjoint(pair_{IFR}(ψ₁, φ₁), pair_{IFR}(ψ₂, φ₂))

✓Lemma 31: ∀φ₁,φ₂,ψ₁,ψ₂. Disjoint(φ₁, φ₂) ⇒ Disjoint(pair_{IFR}(φ₁, ψ₁), pair_{IFR}(φ₂, ψ₂))

✓Lemma 32: ∀ψ₁,ψ₂,φ₁,φ₂. ψ₁ ⊆ φ₁ ∧ ψ₂ ⊆ φ₂ ⇒ pair_{IFR}(ψ₁, ψ₂) ⊆ pair_{IFR}(φ₁, φ₂)

✓Lemma 33: ∀φ,φ₁,φ₂,ψ. DisjointUnion(φ, φ₁, φ₂) ⇒ DisjointUnion(pair_{IFR}(φ, ψ), pair_{IFR}(φ₁, ψ), pair_{IFR}(φ₂, ψ))

Lemma 2: ∀φ,φ₁,φ₂,ψ. φ = φ₁ ⋃ φ₂ ⇒ pair_{IFR}(φ, ψ) = pair_{IFR}(φ₁, ψ) ⋃ pair_{IFR}(φ₂, ψ)

Assume the hypothesis. WTS ‹φ,ψ› = ‹φ₁,ψ› ⋃ ‹φ₂,ψ›.

To show that the set corresponding to the right side is a subset of the set corresponding to the left, assume ‹p,F₁,F₂› is in the right set. Then (1) ‹p,F₂› is in (the set corresponding to) ψ and (2) either ‹p,F₁› is in φ₁ or ‹p,F₁› is in φ₂ (or both). (2) is equivalent to (3) ‹p,F₁› ∈ ‹φ₁ ⋃ φ₂, ψ›. From (1) and (3) it follows that ‹p,F₁,F₂› ∈ ‹φ₁ ⋃ φ₂,ψ›.

To show that the set corresponding to the left side is a subset of the set corresponding to the right, assume ‹p,F₁,F₂› is in the left set. Then ‹p,F₂› ∈ ψ and (‹p,F₁› ∈ φ₁ or ‹p,F₁› ∈ φ₂), which is logically equivalent to (‹p,F₂› ∈ ψ and ‹p,F₁› ∈ φ₁) or (‹p,F₂› ∈ ψ and ‹p,F₁› ∈ φ₂), which is equivalent to ‹p,F₁,F₂› ∈ ‹φ₁,ψ› or ‹p,F₁,F₂› ∈ ‹φ₂,ψ›.

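The identity in Lemma 2 can also be checked exhaustively over every choice of φ₁, φ₂, ψ on a tiny universe (one person, two futures): a brute-force sanity check alongside the informal proof, with IFRs again modeled as sets of (person, future) pairs:

```python
from itertools import chain, combinations, product

PEOPLE = ["p"]
FUTURES = ["F1", "F2"]
UNIV = list(product(PEOPLE, FUTURES))  # all (person, future) pairs

def powerset(xs):
    """All subsets of xs, as frozensets."""
    return [frozenset(c)
            for c in chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def pair_ifr(phi, psi):
    return frozenset((p, (fa, fb)) for (p, fa) in phi for (q, fb) in psi if p == q)

# pair_IFR(phi1 ∪ phi2, psi) == pair_IFR(phi1, psi) ∪ pair_IFR(phi2, psi)
for phi1, phi2, psi in product(powerset(UNIV), repeat=3):
    assert pair_ifr(phi1 | phi2, psi) == pair_ifr(phi1, psi) | pair_ifr(phi2, psi)
```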

Claim 1: ∀φ₁,φ₂,ψ. Disjoint(φ₁, φ₂) ⇒ Disjoint(pair_{IFR}(φ₁, ψ), pair_{IFR}(φ₂, ψ))

Claim 2: ∀φ,φ₁,φ₂,ψ. DisjointUnion(φ, φ₁, φ₂) ⇒ DisjointUnion(pair_{IFR}(ψ, φ), pair_{IFR}(ψ, φ₁), pair_{IFR}(ψ, φ₂))

Claim 3: ∀φ,p,${\vec{F}}$. p ∈ peopleIn_{Δ}(pair_{IFR}(φ, ‹true›), ${\vec{F}}$) ∨ p ∈ peopleIn_{Δ}(pair_{IFR}(co(φ), ‹true›), ${\vec{F}}$)

Claim 4: ∀φ,ψ,p,${\vec{F}}$. p ∈ peopleIn_{Δ}(pair_{IFR}(ψ, ‹true›), ${\vec{F}}$) ⇒ (p ∈ peopleIn_{Δ}(pair_{IFR}(ψ, φ), ${\vec{F}}$) ∨ p ∈ peopleIn_{Δ}(pair_{IFR}(ψ, co(φ)), ${\vec{F}}$))

‹S› : IFR - The person's death is a suicide.

Defn ‹NS› : IFR := co(‹S›) - The person's death is not a suicide.

Defn~ ‹PAS› : IFR - ‹PAS› ⊆ ‹S› - The person ends their life via physician-assisted suicide

Defn ‹non-PA suicide› : IFR := ‹S› \ ‹PAS› - The person's death is a non-physician-assisted suicide.

Defn~ ‹regrettable PAS› : IFR - ‹regrettable PAS› ⊆ ‹PAS›

A person p is in this IFR if they legally use physician-assisted suicide and if there is some information about p, which was unknown at the time when their application for PAS was approved, that, if it had been known, would have caused a significant proportion (say, 20%) of people who would have supported p’s application to resolutely change their mind. Here "resolutely" means that no further information about p would again change the minds of those 20% of people. A more concise but possibly more vague definition would be to say that at least 20% of p’s supporters, if given "perfect information" about p, would change their minds.

Defn ‹non-regrettable PAS› : IFR := ‹PAS› \ ‹regrettable PAS›

Defn~ ‹PAS with wrong and substandard diagnosis› : IFR - ‹PAS with wrong and substandard diagnosis› ⊆ ‹regrettable PAS›

The person is incorrectly diagnosed with a soon-to-be-terminal condition when their real condition is presently treatable. For this argument, we will only allow PAS for conditions that can be diagnosed with very high accuracy. The other option one could take is to argue that a patient can still benefit from PAS even when they were wrongly diagnosed, provided at least that they had the best medical care available. The point is that, if all available evidence has been taken into account, including the probabilities of misleading and missing evidence, then the individual's decision indirectly expresses their utilities of living with suffering vs dying unnecessarily, and in that case we should use *their* utilities as opposed to an average.

Defn~ ‹coerced suicide› : IFR - ‹coerced suicide› ⊆ ‹S›

The person was actively coerced into suicide (assisted or otherwise) by another person, where "actively coerced" means influenced by some means other than guilt, a sense of duty, etc. This is another logical possibility that is easy to "deal with" in this argument (where "deal with" does not mean prevent - see Assumption 4).

Defn ‹non-coerced suicide› : IFR := ‹S› \ ‹coerced suicide›

Defn~ ‹PAS of wrong person› : IFR - ‹PAS of wrong person› ⊆ ‹regrettable PAS›

This could *theoretically* happen in Oregon, for example, since the prescribed lethal drugs don't need to be taken in the presence of a physician. To simplify this argument, we posit a stricter system in which the prescribed drugs must be taken in the presence of a physician.

Defn ‹coerced PAS› : IFR := ‹coerced suicide› ⋂ ‹PAS›

Definitional Axiom 4: ‹coerced PAS› ⊆ ‹regrettable PAS›

Defn ‹very bad PAS› : IFR := ‹PAS of wrong person› ⋃ (‹coerced PAS› ⋃ ‹PAS with wrong and substandard diagnosis›)

Union of some relations that, when true, constitute major failures of the safeguards of the assisted suicide system.

Defn ‹non-coerced PAS› : IFR := ‹PAS› \ ‹coerced PAS›

Defn ‹very bad non-coerced PAS› : IFR := ‹very bad PAS› ⋂ ‹non-coerced PAS›

Defn ‹regrettable non-coerced PAS› : IFR := ‹regrettable PAS› ⋂ ‹non-coerced PAS›

Defn ‹non-regrettable non-coerced PAS› : IFR := ‹non-regrettable PAS› ⋂ ‹non-coerced PAS›

Defn ‹non-coerced non-PA suicide› : IFR := ‹non-coerced suicide› ⋂ ‹non-PA suicide›

Defn ‹difficult-regrettable PAS› : IFR := ‹regrettable PAS› \ ‹very bad PAS›

The kinds of hypothetical regrettable uses of PAS that are hardest to prevent.

Defn ‹difficult-regrettable non-coerced PAS› : IFR := ‹difficult-regrettable PAS› ⋂ ‹non-coerced PAS›

‹NS & worst-case denial of PAS› : IFR

These are the people who have a genuine self-centered desire for PAS and suffer most from not having the option of PAS. Includes at least each person who has an (eventually) physically and/or mentally painful condition and:

- Will not be physically capable of committing suicide without help, by the time they wish to (e.g. if their medical condition severely restricts their movement), and: doesn't know anyone who would be willing to assist them, or does know such a person but is unwilling to put them at risk of prosecution, or is unwilling to break the law.
- Or: would be physically capable, but can't afford to travel to a place where PAS is legal, and is unwilling to end their life in a way that disfigures their body (e.g. for consideration of the person who finds them, or their family, or religious reasons).

‹NS & typical-case denial of PAS› : IFR

These are the people who have a genuine self-centered desire for PAS and *don't* suffer greatly from not having the option of PAS (i.e. not in ‹NS & worst-case denial of PAS›), but whose quality of life is still negatively affected.

‹NS & best-case denial of PAS› : IFR

These are the (possibly empty) set of people who have a genuine self-centered desire for PAS but nonetheless *benefit*, overall, from not having the option of PAS.

Defn~ ‹NS & desire for PAS› : IFR - DisjointUnion₃(‹NS & desire for PAS›, ‹NS & worst-case denial of PAS›, ‹NS & typical-case denial of PAS›, ‹NS & best-case denial of PAS›) ∧ ‹NS & desire for PAS› ⊆ ‹NS› - Person wanted PAS but didn't receive it (for any reason).

Defn ‹NS & no desire for PAS› : IFR := ‹NS› \ ‹NS & desire for PAS›

Defn ‹S,S› : Δ := pair_{IFR}(‹S›, ‹S›) - In both futures the person's death is a suicide.

Defn ‹NS,S› : Δ := pair_{IFR}(‹NS›, ‹S›)

Person's death is not a suicide in the first future, but is in the second future.

Defn ‹S,NS› : Δ := pair_{IFR}(‹S›, ‹NS›)

Person's death is a suicide in the first future, but not in the second future.

Defn ‹NS,NS› : Δ := pair_{IFR}(‹NS›, ‹NS›) - Person's death is not a suicide in either future.

‹true, NS & worse palliative care› : Δ

A genuine worry for some opponents of PAS is that a PAS policy could lead to decreased funding for palliative care. There are two ways one can address this concern: (1) use data from jurisdictions that have PAS to argue that it hasn't happened there, and postulate that Canada will be the same. (2) take extra, active measures to defend against this happening. I have only touched on (2); see note about this in introduction (starts "Fifth, to justify the assumption...")

Defn ‹S, NS & worse palliative care› : Δ := ‹true, NS & worse palliative care› ⋂ ‹S,NS›

Defn ‹NS, NS & worse palliative care› : Δ := ‹true, NS & worse palliative care› ⋂ ‹NS,NS›

The person's death is not a suicide in the status-quo future or the assisted suicide future, and the quality of palliative care available to them is worse in the assisted suicide future compared to in the status-quo future.

Defn~ ‹S, NS because of incr fear of prosecution› : Δ - ‹S, NS because of incr fear of prosecution› ⊆ ‹S,NS›

There is a logical possibility that a person will choose to end their life via illegal non-physician-assisted suicide if legal PAS is not an option for anyone, but will choose *not* to if legal PAS is an option for some people but not for them. If the difference is because they have an increased fear that whoever helps them to die will be prosecuted, then they would be worse off in the assisted-suicide future (with a new law passed) than they would be in the status-quo future. We will specify that any PAS law that this argument endorses does not lead to increased prosecution for illegal assisted suicide.

‹NS, NS because of incr fear of prosecution› : Δ

Definitional Axiom 5: ‹NS, NS because of incr fear of prosecution› ⊆ ‹NS,NS›

Defn ‹true, NS because of incr fear of prosecution› : Δ := ‹NS, NS because of incr fear of prosecution› ⋃ ‹S, NS because of incr fear of prosecution›

Definitional Axiom 6: Disjoint(‹NS, NS because of incr fear of prosecution›, ‹S, NS because of incr fear of prosecution›)

Defn ‹NS, NS & same-or-better palliative care› : Δ := ‹NS,NS› \ ‹NS, NS & worse palliative care›

Defn ‹S, NS & same-or-better palliative care & no incr fear of prosecution› : Δ := (‹S,NS› \ ‹S, NS because of incr fear of prosecution›) \ ‹S, NS & worse palliative care›

‹NS, NS & same-or-better palliative care & exacerbated guilt› : Δ

A genuine worry for some opponents of assisted suicide is that, with physician-assisted suicide available, we will see a decrease in the quality of life of people who are eligible for it but do not want it, due to, for example, feelings of being a burden on their family and friends. We do not try to design an assisted suicide system that prevents this; instead we acknowledge it and postulate that it is compensated for by the benefits of an assisted suicide system.

‹NS, NS & same-or-better palliative care & no exacerbated guilt› : Δ

Definitional Axiom 7: DisjointUnion(‹NS, NS & same-or-better palliative care›, ‹NS, NS & same-or-better palliative care & no exacerbated guilt›, ‹NS, NS & same-or-better palliative care & exacerbated guilt›)

Defn EasierCases : Δ := (‹true, NS & worse palliative care› ⋃ ‹true, NS because of incr fear of prosecution›) ⋃ (pair_{IFR}(‹true›, ‹very bad PAS›) ⋃ ‹S,S›)

Defn ‹NS, coerced suicide› : Δ := pair_{IFR}(‹NS›, ‹coerced suicide›) \ EasierCases

Defn ‹coerced suicide, NS› : Δ := pair_{IFR}(‹coerced suicide›, ‹NS›) \ EasierCases

Defn ‹NS, non-PA suicide & non-coerced› : Δ := pair_{IFR}(‹NS›, ‹non-PA suicide› ⋂ ‹non-coerced suicide›) \ EasierCases

Defn ‹non-PA suicide & non-coerced, NS› : Δ := pair_{IFR}(‹non-PA suicide› ⋂ ‹non-coerced suicide›, ‹NS›) \ EasierCases

Defn ‹NS, difficult-regrettable PAS› : Δ := pair_{IFR}(‹NS›, ‹difficult-regrettable PAS›) \ EasierCases

Defn ‹NS & typical-case denial of PAS, non-regrettable PAS› : Δ := pair_{IFR}(‹NS & typical-case denial of PAS›, ‹non-regrettable PAS›) \ EasierCases

Defn ‹NS & worst-case denial of PAS, non-regrettable PAS› : Δ := pair_{IFR}(‹NS & worst-case denial of PAS›, ‹non-regrettable PAS›) \ EasierCases

Defn ‹S, NS & no incr fear of prosecution› : Δ := ‹S,NS› \ ‹S, NS because of incr fear of prosecution›

Defn ‹S, NS & worse palliative care & no incr fear of prosecution› : Δ := ‹S, NS & worse palliative care› ⋂ ‹S, NS & no incr fear of prosecution›

Note that, at the current level of detail of this proof, the next two symbols are only used in the semantic descriptions of other symbols - not in any axioms.

𝒟 : Distr(◊FuturePairEC) - A finite, subjective Bayesian probability distribution over ◊FuturePairEC

Exp# : Distr(◊FuturePairEC) × Δ → ℝ^{≥0}

Exp#(D,δ) is the expected value of |peopleIn_{Δ}(δ,rep(E))| when ◊FuturePairEC E is chosen at random from D

Defn~ ‹favourable for status quo› : Δ → 𝔹 - ∀δ. ‹favourable for status quo›(δ) ⇔ 0 ≻_{Δ} ΔsOf(peopleIn_{Δ}(δ, $\vec{F}_{\text{typical}}$), $\vec{F}_{\text{typical}}$)

‹favourable for status quo›(δ) means that δ(·, $\vec{F}_{\text{typical}}$) represents a negative change (or equivalently, δ(·, swap($\vec{F}_{\text{typical}}$)) represents a positive change). This is more easily conveyed by examples: ‹favourable for status quo›(‹NS, coerced suicide›) holds because if ‹NS, coerced suicide›(p, $\vec{F}_{\text{typical}}$) then clearly something gets better for p when moving from the assisted suicide future (the second of the pair) to the status quo future (or worse when moving in the opposite direction).

$\vec{F}_{\text{typical}}$ : ◊FuturePair

A particular future pair whose equivalence class is "typical", in a certain sense, with respect to 𝒟.
At a high level, we assume there is some small ε : ℝ^{≥0}, which is a parameter of this argument, such that:

- Extremely unlikely (w.r.t. 𝒟) Δs don't happen to anyone in $\vec{F}_{\text{typical}}$.
- The count of people who δ happens to is roughly the expected value of the corresponding counting random variable of 𝒟. We round up for the Δs that are favourable for the status-quo (SQ) futures, and down for the Δs that are neutral or favourable for the assisted suicide futures.

More precisely:

- |peopleIn_{Δ}(δ,$\vec{F}_{\text{typical}}$)| = 0 for all δ such that Exp#(𝒟,δ) < ε.
- |peopleIn_{Δ}(δ,$\vec{F}_{\text{typical}}$)| = ceil(x) for all δ such that Exp#(𝒟,δ) = x ≥ ε and ‹favourable for status quo›(δ).
- |peopleIn_{Δ}(δ,$\vec{F}_{\text{typical}}$)| = floor(x) for all δ such that Exp#(𝒟,δ) = x ≥ ε and ¬‹favourable for status quo›(δ).
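The rounding rules above can be made mechanical. Here is a toy Python sketch, assuming an invented finite distribution with explicit per-class counts (all names and numbers are hypothetical):

```python
import math

# Toy distribution over future-pair equivalence classes: each class has a
# probability and, per delta, a count |peopleIn(delta, rep(E))|.
# Deltas are just string labels here; the numbers are invented.
dist = [
    (0.7, {"d_favourable_SQ": 2, "d_other": 5}),
    (0.3, {"d_favourable_SQ": 0, "d_other": 9}),
]

def exp_count(dist, delta):
    """Exp#(D, delta): expected count of people delta applies to."""
    return sum(p * counts.get(delta, 0) for p, counts in dist)

def typical_count(dist, delta, eps, favourable_for_status_quo):
    """Count assigned in the 'typical' future pair: 0 below eps; otherwise
    round up for status-quo-favourable deltas and down for the rest."""
    x = exp_count(dist, delta)
    if x < eps:
        return 0
    return math.ceil(x) if favourable_for_status_quo else math.floor(x)

eps = 0.5
assert abs(exp_count(dist, "d_favourable_SQ") - 1.4) < 1e-9
assert typical_count(dist, "d_favourable_SQ", eps, True) == 2   # rounded up
assert typical_count(dist, "d_other", eps, False) == 6          # rounded down
```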

Defn~ ‹all people› : FinSet(People) - ∀p. p ∈ ‹all people›

Assumption 1: ΔsOf(‹all people›, $\vec{F}_{\text{typical}}$) ≻_{Δ} 0 ⇒ ‹should pass›

Goal: ‹should pass›

Defn~ peopleIn$_{\text{IFR}}^{1,*}$ : IFR → FinSet(People) - ∀φ. ∀p. p ∈ peopleIn$_{\text{IFR}}^{1,*}$(φ) ⇔ φ(p, $\vec{F}_{\text{typical}}$[1])

Defn~ peopleIn$_{\text{IFR}}^{2,*}$ : IFR → FinSet(People) - ∀φ. ∀p. p ∈ peopleIn$_{\text{IFR}}^{2,*}$(φ) ⇔ φ(p, $\vec{F}_{\text{typical}}$[2])

Defn ṗeopleIn_{Δ} : Δ → FinSet(People) - ∀δ. ṗeopleIn_{Δ}(δ) = peopleIn_{Δ}(δ, $\vec{F}_{\text{typical}}$)

Definitional Axiom 8: ∀Y₁,Y₂. Y₁ ≈ Y₂ ⇒ Y₂ ≈ Y₁

Definitional Axiom 9: ∀Y. Y ≈ Y

Definitional Axiom 10: ∀Y₁,Y₂,Y₃. Y₁ ≈ Y₂ ∧ Y₂ ≈ Y₃ ⇒ Y₁ ≈ Y₃

Definitional Axiom 11: ∀Y₁,Y₂. Y₁ ≻_{Δ} Y₂ ⇒ ¬Y₂ ≻_{Δ} Y₁

Definitional Axiom 12: ∀Y. ¬Y ≻_{Δ} Y

Definitional Axiom 13: ∀Y₁,Y₂,Y₃. Y₁ ≻_{Δ} Y₂ ∧ Y₂ ≻_{Δ} Y₃ ⇒ Y₁ ≻_{Δ} Y₃
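Definitional Axioms 8-13 say no more than that ≈ is an equivalence relation and ≻_{Δ} a strict partial order. As a sanity check, a toy Python model (numeric equality for ≈, strict greater-than for ≻_{Δ}, on a small domain standing in for aggregate Δ-values) satisfies all six axioms by exhaustive enumeration:

```python
from itertools import product

# Toy domain standing in for aggregate Delta-values. Purely illustrative:
# the argument itself never assumes these values are numeric.
Y = [-2, -1, 0, 1, 2]
approx = lambda a, b: a == b            # models ≈
strictly_better = lambda a, b: a > b    # models ≻

for y1, y2, y3 in product(Y, Y, Y):
    # Axioms 8-10: ≈ is symmetric, reflexive, transitive.
    assert approx(y1, y1)
    if approx(y1, y2):
        assert approx(y2, y1)
    if approx(y1, y2) and approx(y2, y3):
        assert approx(y1, y3)
    # Axioms 11-13: ≻ is asymmetric, irreflexive, transitive.
    assert not strictly_better(y1, y1)
    if strictly_better(y1, y2):
        assert not strictly_better(y2, y1)
    if strictly_better(y1, y2) and strictly_better(y2, y3):
        assert strictly_better(y1, y3)
```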

Definitional Axiom 14: ∀P₁,P₂,${\vec{F}}$. Disjoint(P₁, P₂) ∧ ΔsOf(P₁, ${\vec{F}}$) ≈ 0 ∧ ΔsOf(P₂, ${\vec{F}}$) ≈ 0 ⇒ ΔsOf(P₁ ⋃ P₂, ${\vec{F}}$) ≈ 0

Definitional Axiom 15: ∀P₁,P₂. ∀${\vec{F}}$. Disjoint(P₁, P₂) ∧ ΔsOf(P₁, ${\vec{F}}$) ≿_{Δ} 0 ∧ ΔsOf(P₂, ${\vec{F}}$) ≿_{Δ} 0 ⇒ ΔsOf(P₁ ⋃ P₂, ${\vec{F}}$) ≿_{Δ} 0

Definitional Axiom 16: ∀P₁,P₂. ∀${\vec{F}}$. Disjoint(P₁, P₂) ∧ ΔsOf(P₁, ${\vec{F}}$) ≿_{Δ} 0 ∧ ΔsOf(P₂, ${\vec{F}}$) ≻_{Δ} 0 ⇒ ΔsOf(P₁ ⋃ P₂, ${\vec{F}}$) ≻_{Δ} 0

Definitional Axiom 17: ∀P₁,P₂. ∀${\vec{F}}$. Disjoint(P₁, P₂) ∧ ΔsOf(P₁, ${\vec{F}}$) ≈ ΔsOf(P₂, swap(${\vec{F}}$)) ⇒ ΔsOf(P₁ ⋃ P₂, ${\vec{F}}$) ≈ 0

Definitional Axiom 18: ∀P₁,P₂. ∀${\vec{F}}$. Disjoint(P₁, P₂) ∧ ΔsOf(P₁, ${\vec{F}}$) ≿_{Δ} ΔsOf(P₂, swap(${\vec{F}}$)) ∧ ΔsOf(P₁, ${\vec{F}}$) ≿_{Δ} 0 ⇒ ΔsOf(P₁ ⋃ P₂, ${\vec{F}}$) ≿_{Δ} 0

Definitional Axiom 19: ∀P₁,P₂. ∀${\vec{F}}$. Disjoint(P₁, P₂) ∧ ΔsOf(P₁, ${\vec{F}}$) ≻_{Δ} ΔsOf(P₂, swap(${\vec{F}}$)) ∧ ΔsOf(P₁, ${\vec{F}}$) ≿_{Δ} 0 ⇒ ΔsOf(P₁ ⋃ P₂, ${\vec{F}}$) ≻_{Δ} 0
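Definitional Axioms 14-19 govern how the Δ-values of disjoint groups combine. One easy way to see that they are jointly satisfiable is a toy additive model, in which a group's Δ-value is the sum of invented per-person signed changes and swap negates each change; again, the argument itself never assumes Δ-values are numeric:

```python
# Toy additive model (illustrative only). A group's Delta-value is the sum
# of invented per-person signed changes; swap negates each change.
def delta_of(changes):            # stands in for ΔsOf(P, F)
    return sum(changes)

def swapped(changes):             # stands in for ΔsOf(P, swap(F))
    return [-c for c in changes]

p1 = [3, -1]                      # two disjoint groups of hypothetical people
p2 = [1, 0]

# Axiom 16 pattern: nonnegative + strictly positive => strictly positive.
assert delta_of(p1) > 0 and delta_of(p2) >= 0
assert delta_of(p1 + p2) > 0

# Axiom 19 pattern: ΔsOf(P1,F) ≻ ΔsOf(P2,swap(F)) and ΔsOf(P1,F) ≿ 0
# => ΔsOf(P1 ∪ P2, F) ≻ 0.
assert delta_of(p1) > delta_of(swapped(p2)) and delta_of(p1) >= 0
assert delta_of(p1 + p2) > 0
```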

Assumption 2: For each δ in {pair_{IFR}(‹true›, ‹PAS of wrong person›), ‹S, NS because of incr fear of prosecution›, pair_{IFR}(‹true›, ‹PAS with wrong and substandard diagnosis›), ‹NS, NS & worse palliative care›}: ṗeopleIn_{Δ}(δ) = ∅

The proposed safeguards for the implementation make these Δs so unlikely that they do not apply to anyone in $\vec{F}_{\text{typical}}$.

Assumption 3: For each δ in {‹S,S›, pair_{IFR}(‹NS & no desire for PAS›, ‹non-regrettable PAS›), ‹NS, NS & same-or-better palliative care & no exacerbated guilt›, ‹S, NS & same-or-better palliative care & no incr fear of prosecution›}: ΔsOf(ṗeopleIn_{Δ}(δ), $\vec{F}_{\text{typical}}$) ≿_{Δ} 0

The quality of life for these people is not significantly different between the two futures. Formally, we make the superficially weaker assumption that their quality of life is no worse in the assisted-suicide future than it is in the status-quo future.

Assumption 4: ΔsOf(ṗeopleIn_{Δ}(‹NS, coerced suicide›), $\vec{F}_{\text{typical}}$) ≈ ΔsOf(ṗeopleIn_{Δ}(‹coerced suicide, NS›), swap($\vec{F}_{\text{typical}}$))

Let P₁ be the people whose death is not a suicide in the status-quo future (SQF) and whose death is a coerced suicide in the assisted-suicide future (ASF). Let P₂ be the people whose death is a coerced suicide in SQF and not a suicide in ASF. Note that P₁ and P₂ are disjoint. When it comes to comparing SQF and ASF, the cost to the people in P₁ of moving from SQF to ASF is approximately equal to the cost to the people in P₂ of moving from ASF to SQF. In a more-detailed version of this argument, this assumption would be derived (by a single axiom) from two simpler ones:

- The size of P₁ is approximately equal to the size of P₂. This requires that the passing of PAS legislation eliminates as many coerced suicides as it introduces (P₂ are eliminated, P₁ are introduced).
- You believe coerced suicide in ASF and coerced suicide in SQF are equally-bad fates.

Assumption 5: ΔsOf(ṗeopleIn_{Δ}(‹NS & typical-case denial of PAS, non-regrettable PAS›), $\vec{F}_{\text{typical}}$) ≿_{Δ} ΔsOf(ṗeopleIn_{Δ}(‹NS, NS & same-or-better palliative care & exacerbated guilt›), swap($\vec{F}_{\text{typical}}$))

Let P₁ be the people who satisfy ‹NS & typical-case denial of PAS› in SQF and ‹non-regrettable PAS› in ASF. Let P₂ be the people whose death is not a suicide in SQF who, in ASF, still do not die by suicide, and have equally-good access to palliative care, but also are tormented with guilt for not choosing PAS (and there was no such guilt for them in SQF, because PAS was illegal). When it comes to comparing SQF and ASF, the cost to the people in P₁ of moving from ASF to SQF is at least as great as the cost to the people in P₂ of moving from SQF to ASF. In a more-detailed version of this argument, this assumption would be derived (by a single axiom) from two simpler ones:

- For some fraction n/m ≥ 1 (say, n=10 and m=1), |P₂| ≤ (n/m)|P₁|. That is, you grant that P₂ may be larger than P₁, but you posit that it is no more than (n/m) times larger.
- You believe it is at least as bad to move m people who satisfy ‹NS & typical-case denial of PAS, non-regrettable PAS› from ASF to SQF as it is to move n people who satisfy ‹NS, NS & same-or-better palliative care & exacerbated guilt› from SQF to ASF.
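The two simpler sub-assumptions combine by elementary arithmetic, at least in a toy additive model. All numbers below (the per-person magnitudes b and g, the group sizes) are invented for illustration:

```python
# Toy arithmetic for the n/m trade-off (all numbers invented). Suppose each
# P1-person loses benefit b when moved from ASF to SQF, and each P2-person
# bears guilt cost g when moved from SQF to ASF. The second bullet says
# m*b >= n*g; the first says |P2| <= (n/m)*|P1|. Together, the aggregate
# benefit to P1 covers the aggregate cost to P2.
n, m = 10, 1
b, g = 20.0, 1.5           # hypothetical per-person magnitudes
assert m * b >= n * g       # bullet 2: m P1-moves outweigh n P2-moves

size_p1 = 100
size_p2 = 700               # satisfies |P2| <= (n/m)*|P1| = 1000
assert size_p2 <= (n / m) * size_p1

# Aggregate comparison behind Assumption 5, in this additive toy model:
assert size_p1 * b >= size_p2 * g
```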

Assumption 6: ΔsOf(ṗeopleIn_{Δ}(‹NS & worst-case denial of PAS, non-regrettable PAS›), $\vec{F}_{\text{typical}}$) ≻_{Δ} ΔsOf(ṗeopleIn_{Δ}(‹NS, difficult-regrettable PAS›), swap($\vec{F}_{\text{typical}}$))

This assumption is similar to Assumption 5 in form (except for using ≻_{Δ} instead of ≿_{Δ}), but it concerns more-severe kinds of changes. This assumption is often the main *explicit* disagreement between non-religious proponents of PAS and (self-reported) non-religious opponents of PAS. So please read the informal description of Assumption 5 first. Like Assumption 5, in a more-detailed version of this argument, this assumption would be derived (by a single axiom) from two simpler ones, which are of the form in Assumption 5 *except* that one might argue the fraction n/m is < 1 *if* the safeguards of the PAS system are very good.

Defn People₁ : FinSet(People) := ṗeopleIn_{Δ}(pair_{IFR}(‹true›, ‹PAS with wrong and substandard diagnosis›)) ⋃ (ṗeopleIn_{Δ}(pair_{IFR}(‹true›, ‹PAS of wrong person›)) ⋃ (ṗeopleIn_{Δ}(‹NS, NS & worse palliative care›) ⋃ ṗeopleIn_{Δ}(‹S, NS because of incr fear of prosecution›)))

Defn People₂ : FinSet(People) := ((ṗeopleIn_{Δ}(‹S,S›) ⋃ ṗeopleIn_{Δ}(‹S, NS & same-or-better palliative care & no incr fear of prosecution›)) ⋃ ṗeopleIn_{Δ}(‹NS, NS & same-or-better palliative care & no exacerbated guilt›)) ⋃ ṗeopleIn_{Δ}(pair_{IFR}(‹NS & no desire for PAS›, ‹non-regrettable PAS›))

Defn People₃ : FinSet(People) := ṗeopleIn_{Δ}(‹NS, coerced suicide›) ⋃ ṗeopleIn_{Δ}(‹coerced suicide, NS›)

Defn People₄ : FinSet(People) := ṗeopleIn_{Δ}(‹NS, NS & same-or-better palliative care & exacerbated guilt›) ⋃ ṗeopleIn_{Δ}(‹NS & typical-case denial of PAS, non-regrettable PAS›)

Defn People₅ : FinSet(People) := ṗeopleIn_{Δ}(‹NS, difficult-regrettable PAS›) ⋃ ṗeopleIn_{Δ}(‹NS & worst-case denial of PAS, non-regrettable PAS›)

Assertion 1: ΔsOf(ṗeopleIn_{Δ}(‹NS & typical-case denial of PAS, non-regrettable PAS›), $\vec{F}_{\text{typical}}$) ≻_{Δ} 0

From ‹NS & typical-case denial of PAS› to ‹non-regrettable PAS› is a *good change*. Some self-reported religious people, who are not in the intended audience of this argument, will reject this assertion.

Assertion 2: ΔsOf(ṗeopleIn_{Δ}(‹NS & worst-case denial of PAS, non-regrettable PAS›), $\vec{F}_{\text{typical}}$) ≻_{Δ} 0

From ‹NS & worst-case denial of PAS› to ‹non-regrettable PAS› is a *good change*. Some religious people, who are not in the intended audience of this argument, will reject this assertion.

Simplifying Assumption 1: peopleIn$_{\text{IFR}}^{1,*}$(‹NS & desire for PAS›) = peopleIn$_{\text{IFR}}^{1,*}$(‹NS & typical-case denial of PAS›) ⋃ peopleIn$_{\text{IFR}}^{1,*}$(‹NS & worst-case denial of PAS›)

From the proposition accompanying the introduction of ‹NS & desire for PAS›, this is equivalent to the assumption that there are no instances of ‹NS & best-case denial of PAS› in SQF. This assumption is disputable. It would be better to include the hypothetical set of people who satisfy ‹NS & best-case denial of PAS› in Assumption 5 or Assumption 6, which would amount to a strengthening of those assumptions.

✓Lemma 70: ΔsOf(‹all people›, $\vec{F}_{\text{typical}}$) ≻_{Δ} 0

Defn People_{1-5} : FinSet(People) := (((People₁ ⋃ People₂) ⋃ People₃) ⋃ People₄) ⋃ People₅

✓Lemma 71: ‹all people› = People_{1-5}

Lemma 4: ‹all people› ⊆ ((ṗeopleIn_{Δ}(‹S,S›) ⋃ ṗeopleIn_{Δ}(‹S,NS›)) ⋃ ṗeopleIn_{Δ}(‹NS,S›)) ⋃ ṗeopleIn_{Δ}(‹NS,NS›)

The four sets on the right hand side are a partition of ‹all people› since ‹NS› is the complement of ‹S›.

Lemma 5: ṗeopleIn_{Δ}(‹S,S›) ⊆ People_{1-5}

Follows easily from basic set reasoning, Claim 22, and the definition of People_{1-5}.

Claim 22: ṗeopleIn_{Δ}(‹S,S›) ⊆ People₂

Lemma 6: ṗeopleIn_{Δ}(‹S,NS›) ⊆ People_{1-5}

Follows easily from basic set reasoning, Claim 23, and the definition of People_{1-5}.

Claim 23: ṗeopleIn_{Δ}(‹S,NS›) ⊆ People₁ ⋃ People₂

Claim 24: ṗeopleIn_{Δ}(‹NS,S›) ⊆ People_{1-5}

Lemma 7: ṗeopleIn_{Δ}(‹NS,NS›) ⊆ People_{1-5}

Follows easily from basic set reasoning, Claim 25, and the definition of People_{1-5}.

Claim 25: ṗeopleIn_{Δ}(‹NS,NS›) ⊆ (People₁ ⋃ People₂) ⋃ People₄

✓Lemma 72: People_{1-5} ⊆ ‹all people›

✓Lemma 73: ‹all people› ⊆ People_{1-5}

✓Lemma 74: ΔsOf(People_{1-5}, $\vec{F}_{\text{typical}}$) ≻_{Δ} 0

✓Lemma 75: ΔsOf(People₅, $\vec{F}_{\text{typical}}$) ≻_{Δ} 0

✓Lemma 76: ΔsOf(((People₁ ⋃ People₂) ⋃ People₃) ⋃ People₄, $\vec{F}_{\text{typical}}$) ≿_{Δ} 0

✓Lemma 77: ΔsOf(People₄, $\vec{F}_{\text{typical}}$) ≿_{Δ} 0

Claim 26: Disjoint(ṗeopleIn_{Δ}(‹NS, NS & same-or-better palliative care & exacerbated guilt›), ṗeopleIn_{Δ}(‹NS & typical-case denial of PAS, non-regrettable PAS›))

✓Lemma 78: ΔsOf((People₁ ⋃ People₂) ⋃ People₃, $\vec{F}_{\text{typical}}$) ≈ 0

✓Lemma 79: ΔsOf(People₃, $\vec{F}_{\text{typical}}$) ≈ 0

✓Lemma 80: ΔsOf(People₁ ⋃ People₂, $\vec{F}_{\text{typical}}$) ≈ 0

✓Lemma 81: ΔsOf(People₁, $\vec{F}_{\text{typical}}$) ≈ 0

✓Lemma 82: ΔsOf(People₂, $\vec{F}_{\text{typical}}$) ≈ 0