On this topic:

1) objet petit a = “less than zero” = “less than the empty set”

2) Qualia is uncertainty, uncertainty is conditional counting

3) Virtuality is what is left behind by conditional subtraction

4) encapsulation is relativity

5) relativity?

6) Conditional Counting of Qualia

7) Why N-1 in standard deviation?

***

natural numbers are usually shown in set theory as follows:

`0 = {}`

`1 = {0} = {{}}`

`2 = {0,1} = {0,{0}} = {{},{{}}}`

`3 = {0,1,2} = {0,{0},{0,{0}}} = {{},{{}},{{},{{}}}}`
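as a plain illustration (a toy sketch, not part of the new notation), this standard construction can be written in python with frozensets; the names `succ`, `zero` etc. are our own:

```python
# von Neumann naturals as nested frozensets: n = {0, 1, ..., n-1}

def succ(n):
    """n + 1 = n union {n}: a number unites with itself-in-brackets."""
    return n | frozenset([n])

zero = frozenset()    # 0 = {}
one = succ(zero)      # 1 = {{}}
two = succ(one)       # 2 = {{},{{}}}
three = succ(two)     # 3 = {{},{{}},{{},{{}}}}

# each number has as many elements as its value,
# and contains every smaller number as an element
assert len(three) == 3
assert zero in three and one in three and two in three
```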

if we introduce objet petit a, a new representation becomes possible:

— we designate objet a by ‘·’, which indicates the possibility of a new element

(middle dot is sometimes used in a function f(·) indicating that its parameter is left undetermined. when you solve a puzzle like a nonogram, sometimes you dot some squares to mark something in your mind. it is the same dot.)

— we remove the outermost brackets to indicate the “non-all”ness of the field.

— we remove the commas because “the field is open”.

the new notation allows us to represent new numbers that are “less than” usual numbers:

`less than 0 = 0' = ·`

`less than 1 = 1' = · {·}`

`less than 2 = 2' = · {·} { · {·} }`

`less than 3 = 3' = · {·} { · {·} } { · {·} { · {·} } }`

...
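these strings can be generated mechanically from the rule 0' = · and (n+1)' = n' {n'}; a toy sketch in python, where spacing is cosmetic and the function name `lesser` is ours:

```python
# "lesser numbers" as strings: 0' = · and (n+1)' = n' {n'}
# only the dots and brackets carry meaning; spacing is cosmetic

def lesser(n):
    """return the open/dotted representation n' of the natural number n."""
    s = "·"                      # 0' = · , the bare possibility
    for _ in range(n):
        s = s + " {" + s + "}"   # (k+1)' = k' {k'}
    return s

print(lesser(0))   # ·
print(lesser(1))   # · {·}
print(lesser(2))   # · {·} {· {·}}
```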

(side note: alain badiou thought ∅ to be the basic symbol, and called it “void”. however ∅ is not basic, and moreover, ∅ is not void. it is just another pair of brackets ∅ = {} and “void” is not these empty brackets themselves, but what is *not* inside them, namely, the dot. dot also designates objet petit a, the “less than nothing” or “less than ∅” indicated by slavoj zizek’s book with this title. “death drive” is only cognizable when we see the dot, which is normally not there.)

based on this new notation, any usual number can be obtained by “totalizing” its “lesser” number.

we “totalize” a lesser number by bracketing it inside a closed set.

in a closed set,

— possibilities (‘·’s) disappear, they are automatically cancelled out.

— instead of them, commas (‘,’s) appear to “place” the elements by separating the neighboring “counter-brackets” (‘}{‘ becomes ‘},{‘).

for example, “less than 0” becomes 0 as follows

`{less than 0} = {0'} = {·} = {} = 0`

(here, brackets as the “empty signifier” cancel out the objet petit a (‘·’))

similarly,

`1 = {less than 1} = {1'} = {· {·}} = {{}}`

`2 = {less than 2} = {2'} = {· {·} { · {·} }} = {{},{{}}}`

`3 = {less than 3} = {3'} = {· {·} { · {·} } { · {·} { · {·} } }} = {{},{{}},{{},{{}}}}`

...
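totalization can be mimicked as a string operation on the representations above; a toy sketch (the name `totalize` is ours): dots and spaces are cancelled, commas are placed between counter-brackets, and the field is closed:

```python
def totalize(lesser_str):
    """{n'} = n : bracketing a lesser number gives the usual number."""
    s = lesser_str.replace("·", "").replace(" ", "")  # possibilities cancel out
    s = s.replace("}{", "},{")                        # '}{' becomes '},{'
    return "{" + s + "}"                              # the field is closed

assert totalize("·") == "{}"                          # {0'} = 0
assert totalize("· {·}") == "{{}}"                    # {1'} = 1
assert totalize("· {·} {· {·}}") == "{{},{{}}}"       # {2'} = 2
```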

(side note: this kind of representation {n’} is similar to ‘pointers’ in programming. a pointer is “less than” a variable, it is used to reach variables indirectly, “by reference”. e.g. if i is a variable and pi is its pointer, i == *pi)

(side note: alain badiou defined the “indecision” of an “event” as a “self-containing set”. we define it as the indecision of independent existence expressed by the conditional number 1′ = · {·}, an “open set” that consists of the containment of the “closed set” 1 = {{}}.)

similarly, we can “open” a set by “dotting” it. opening/dotting “detotalizes”: if there’s an outer dot, inner dots also appear; the outer dot “propagates” the dots to cover all of the brackets. to obtain any lesser number, we can “count from the dot”:

0' = ·

1' = · 0 = ·{} = ·{·}

2' = · 0 1 = · {} {{}} = · { · } { · {·} }

3' = · 0 1 2 = · {} {{}} {{},{{}}} = · { · } { · { · } } { · { · } { · { · } } }

...

(side note: “dotting of brackets” indicates the “opening of a field”, a temporary suspension of structure, a “revolutionary moment” that will allow structural changes. in this sense, dot is also the vanishing mediator of a structural change. a simple example: when you hold your finger on your iphone, the icons begin to shake, in this special timeframe, you can move these icons, also x’s appear to delete them. “brackets” of the interface have been “dotted”. you press the home button to re-totalize and undot these brackets. dots only appear in a temporary “special world” from which we will soon return to our “ordinary world” as captured by the hero’s journey. in this sense, deleting an iphone application is basically analogous to “going to mountains, killing the dragon and returning home”. the catch is, it is also possible to reverse this journey. as opposed to a normal hero that first dots and then undots to return to ordinary structure, an inverted hero would normally live in “dotted brackets” (special world), and she’d only temporarily undot to enter an “ordinary world” and afterwards, re-dot herself again, propagating her “dots” to the ordinary structure she had confronted, making it a “special world”. organizational phenomena such as “flexible working” etc. point towards a world becoming more and more dotted and special, where ordinary heroes suffer and only inverted heroes can survive by themselves. in such a world, even if ordinary heroes exist, they have to rely on other inverted heroes and their value is merely imaginary. in today’s “dotted” world, ordinary heroes only fully exist in hollywood and other pseudo-hollywoods.)

(side note: twitter is an example of a dotted bracket. its bracket is the stream of tweets on the page, and its “outer dot” is the small message “N new tweets” that indicates the elements-to-come. the red numbered indicators in facebook/gmail etc. are also “dots” that open the field. the fact that they are static, that they don’t animate, shows that they are not temporary. the online world has become dotted as its normal state. we don’t want pop-ups and animating messages, because they express a delusion of “ordinarily undotted brackets” on the internet. no. there is no “ordinary world” online, so the dots have to be static, they are here to stay. every online interface has to represent the element-to-come by a certain dot.)

(side note: if we compare them as “value forms”, “undotted brackets” are measured by “gold”, and “dotted brackets” are measured by “enthusiasm”. “gold” here indicates that which can be stored in a static space, a chest, a hard disk, etc. “enthusiasm” indicates that which cannot be stored and only belongs to the present time. the present time as such is timeless when it is cut from the time frame, yet it is eternal, it captures infinity. as wittgenstein said: “if by eternity is understood not endless temporal duration but timelessness, then he lives eternally who lives in the present”. the eternal timelessness described here is the “dotted brackets” where only “enthusiasm” counts and “gold” is meaningless.)

(side note: Hegel called brackets of a closed set, “Grenze” or “limit”; and brackets of an open set, “Schranke” or “frontier”. Being and Event p.162 / “The operator of qualitative infinity (of the dots) is passing-beyond. The operator of quantitative infinity (of the brackets) is duplication.” Being and Event p.168)

“undotting” or “totalization” with brackets is what distinguishes Imaginary from Symbolic.

in other words, Symbolic is “placed” Imaginary: Imaginary becomes Symbolic when elements are structured by brackets and commas in a closed set, and this “placing” erases the objet petit a.

division of the subject into Imaginary/Symbolic can be discerned in the formula of incrementing usual numbers:

`n + 1 = n union {n}`

thus, a usual number increments itself by “uniting” with its symbolic mandate (the number itself in brackets).

Real (represented by the dots) is invisible in this formula, because it only becomes visible when the field is “opened” and possibilities are taken into account by ‘·’s.

for “lesser” numbers, incrementing formula becomes as follows:

`less than n+1 = (n+1)' = n' {n'} = n' n`

example:

`less than 1 = 1' = · {·}`

`less than 2 = 2' = 1' {1'} = · {·} { · {·} }`

in the lesser case,

— there is no operation for “uniting”, and

— the symbol (bracketing) of the lesser number gives the usual number.

as a result,

— usual numbers increment themselves by integrating their symbolic mandate (alienation?)

— lesser numbers increment themselves by “confronting” their usual number (separation? reconciliation?)

***

statistical/probabilistic questions concern the 1′ which in turn concerns 0′ because 1′ is obtained by confronting zero with its lesser number, in other words, by “dotting the zero”:

`1' = · 0 = 0' 0 = · {} = · { · }`

the inner dot and the outer dot here say “to be or not to be”.

type I and type II errors in statistics concern the fear of these two dots. type I is the fear of the dot inside the brackets, and type II is the fear of the dot outside the brackets.

Bayesian Monte Carlo methods allow one to operate between these two dots by weighting/rejecting/resampling etc.

— a weight of a sample is the fraction inner dot/outer dot in terms of probability.

— rejection is “undotting” a sample based on its weight.

— resampling is “undotting” a set of samples based on their weights.

but Bayesian methods require one to “count the dot”, i.e. assume a prior probability.

frequentists do not want to see the dot, so their fear is reflected in type I and type II errors.

subjective Bayesians accept the dot, and assign them probability, so that

`sum of all pr(·) = 1`

beginning from the prior dot to posterior dots that are obtained by “likelihoods”.

let’s assume that we are receiving bits of information b1,b2,…

p(b1) = 1' = · { · }

p(b1,b2) = 2' = · { · } { · { · } }

p(b1,b2,b3) = 3' = · { · } { · { · } } { · { · } { · { · } } }

...

here each dot denotes a combination of those bits, such that the joint probability of n consecutive bits involves 2^n dots.
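as a quick check (a toy sketch), each dot stands for one joint outcome of the bits, and the count doubles with every new bit:

```python
from itertools import product

# n binary bits give 2**n joint outcomes -- one "dot" per combination
for n in (1, 2, 3):
    dots = list(product([0, 1], repeat=n))
    assert len(dots) == 2 ** n
    print(n, len(dots))   # prints: 1 2 / 2 4 / 3 8
```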

then, what is a “likelihood”?

p(b1) = 1' = · { · }

p(b1,b2) = 2' = · { · } { · { · } }

p(b2 | b1) = ?

a likelihood determines how much the current dots “propagate” to the next lesser number, so that the final dots have probabilities at particular ratios to the initial dots:

p(b1,b2)/p(b1) = p(b2 | b1)

say, if

p(b1=1) = p1

p(b2=1|b1=1) = p2

p(b2=1|b1=0) = p3

==>

p(b1=1,b2=1) = p1 p2

p(b1=0,b2=1) = (1-p1) p3

then it follows that

p(b1) = 1' = · { · } = 1-p1 { p1 }

p(b1,b2) = 2' = · { · } { · { · } }

= (1-p1)(1-p3) { p1(1-p2) } { (1-p1)p3 { p1 p2 } }
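with concrete numbers (p1, p2, p3 are arbitrary example values, our choice), we can check that the four dots of 2' form a proper joint distribution:

```python
# the four dots of 2' = · { · } { · { · } } as joint probabilities,
# given p1 = p(b1=1), p2 = p(b2=1|b1=1), p3 = p(b2=1|b1=0)
p1, p2, p3 = 0.6, 0.7, 0.2   # arbitrary example values (our choice)

joint = {
    (0, 0): (1 - p1) * (1 - p3),  # the outer dot
    (1, 0): p1 * (1 - p2),        # the dot of { · }
    (0, 1): (1 - p1) * p3,        # the dot of { · { · } }
    (1, 1): p1 * p2,              # the innermost dot
}

assert abs(sum(joint.values()) - 1.0) < 1e-12            # sum of all pr(·) = 1
assert abs(joint[(1, 0)] + joint[(1, 1)] - p1) < 1e-12   # marginal recovers p1
```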

in the binary case, we can directly assign probabilities to these dots, because they denote “event happens” and “event does not happen”. this only concerns the problem “to be or not to be”.

in case of other distributions with events consisting of sub-events or with continuous event spaces, dots will have more complex functions.

let’s look at the discrete case.

let c1,c2,.. be categorical variables that take values 1,2,3. in this case, p(c1) decomposes into 3 binary variables:

p(c1) = p(b1) p(b2|b1) p(b3|b2,b1) = · 0 1 2 = 3' = ·{·}{·{·}}{·{·}{·{·}}}

in our special case, only one of these bits is one, which removes some of the dots:

p(b2=1|b1=1) = 0

p(b3=1|b1=1,b2) = 0

p(b3=1|b1,b2=1) = 0

p(b3=1|b1=0,b2=0) = 1

p(c1) = {·}{·{}}{·{}{{}}}

so the dot appears either in the 1st step, the 2nd step or the 3rd step. note that “totalization” did not erase these dots this time, because they are now “quantified” and they are more than just a dot. we can use a new symbol to represent a quantified dot:

p(c1) = {◊}{◊{}}{◊{}{{}}}

a “quantified dot” (◊) is a dot with an assigned probability. we now correct the summation formula above:

`sum of all pr(◊) = 1`

now we have a representation for a single categorical variable.
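a toy sketch of this decomposition with example numbers (q1 and q2 are our own names for the two free conditional probabilities):

```python
# a 3-way categorical unrolled into binary choices: b1 decides
# "category 1 or not"; if not, b2 decides "category 2 or not";
# if both fail, b3 = 1 deterministically (p(b3=1|b1=0,b2=0) = 1).
# q1 = p(b1=1) and q2 = p(b2=1|b1=0) are example values (our choice)
q1, q2 = 0.5, 0.6

p_cat = {
    1: q1,                   # first quantified dot
    2: (1 - q1) * q2,        # second quantified dot
    3: (1 - q1) * (1 - q2),  # third quantified dot
}

assert abs(sum(p_cat.values()) - 1.0) < 1e-12   # sum of all pr(◊) = 1
```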

now, we want to put some of the probability outside brackets:

p(c1) = ◊{◊}{◊{}}{◊{}{{}}}

this will allow us to increase the number of categories. let’s call this outer probability a.

c2 will consist of 3 or 4 categories. we assume b4 will be the additional category, which only appears with a:

p(c1) = a{◊}{◊{}}{◊{}{{}}}

p(c1,c2) = p(c1,b4,b5,b6,b7)

p(c1,b4) = a{◊}{◊{}}{◊{}{{}}} {a◊{}{{}}{{}{{}}}}

p(c1,b4,b5) = a{◊}{◊{}}{◊{}{{}}} {a◊{}{{}}{{}{{}}}} {a◊{◊}{◊{}}{◊{}{{}}} {{}{{}}{{}{{}}}}}

p(c1,b4,b5,b6) = a{◊}{◊{}}{◊{}{{}}} {a◊{}{{}}{{}{{}}}} {a◊{◊}{◊{}}{◊{}{{}}} {{}{{}}{{}{{}}}}} {a{◊}{◊{}}{◊{}{{}}} {{}{{}}{{}{{}}}} {{}{{}}{{}{{}}} {{}{{}}{{}{{}}}}}}

p(c1,b4,b5,b6,b7) = a{◊}{◊{}}{◊{}{{}}} {a◊{}{{}}{{}{{}}}} {a◊{◊}{◊{}}{◊{}{{}}} {{}{{}}{{}{{}}}}} {a{◊}{◊{}}{◊{}{{}}} {{}{{}}{{}{{}}}} {{}{{}}{{}{{}}} {{}{{}}{{}{{}}}}}} {a{◊}{◊{}}{◊{}{{}}} {{}{{}}{{}{{}}}} {{}{{}}{{}{{}}} {{}{{}}{{}{{}}}}} {{}{{}}{{}{{}}} {{}{{}}{{}{{}}}} {{}{{}}{{}{{}}} {{}{{}}{{}{{}}}}}}}

as you can see, due to the categorical constraint, though there are many brackets, most probabilities are zero.

there will be even more brackets then, because c3 will consist of 3/4/5 categories:

p(c1) = a{◊}{◊{}}{◊{}{{}}}

p(c1,c2) = p(c1,b4,b5,b6,b7) = a{◊}{◊{}}{◊{}{{}}} {a◊{}{{}}{{}{{}}}} {a◊{◊}{◊{}}{◊{}{{}}} {{}{{}}{{}{{}}}}} {a{◊}{◊{}}{◊{}{{}}} {{}{{}}{{}{{}}}} {{}{{}}{{}{{}}} {{}{{}}{{}{{}}}}}} {a{◊}{◊{}}{◊{}{{}}} {{}{{}}{{}{{}}}} {{}{{}}{{}{{}}} {{}{{}}{{}{{}}}}} {{}{{}}{{}{{}}} {{}{{}}{{}{{}}}} {{}{{}}{{}{{}}} {{}{{}}{{}{{}}}}}}}

p(c1,c2,c3) = p(c1,c2,b8,b9,b10,b11,b12) = ...

what if we began with not 3 but 0 categories? (CRP)

p(c1) = a

(assign 1 to cluster a)

p(c1,c2) = p(c1,b1,b2)

p(c1,b1) = a {a◊}

(assign 2 to cluster a)

p(c1,b1,b2) = a {a◊} {a◊ {}}

(assign 2 to cluster 1)

p(c1,c2,c3) = p(c1,c2,b3,b4,b5)

p(c1,c2,b3) = a {a◊} {a◊ {}} {a◊ {}{{}}}

(assign 3 to cluster a)

p(c1,c2,b3,b4) = a {a◊} {a◊ {}} {a◊ {}{{}}} {a◊ {a◊} {a◊ {}} {{}{{}}}}

(assign 3 to cluster 1)

p(c1,c2,b3,b4,b5) = a {a◊} {a◊ {}} {a◊ {}{{}}} {a◊ {a◊} {a◊ {}} {{}{{}}}} {a◊ {a◊} {{}} {{}{{}}} {{} {{}} {{}{{}}}}}

(assign 3 to cluster 2)

p(c1,c2,c3,c4) = p(c1,c2,c3,b6,b7,b8,b9) = ...

result:

p(c1) = a

p(c1,c2) = a {a◊} {a◊ {}}

p(c1,c2,c3) = a {a◊} {a◊ {}} {a◊ {}{{}}} {a◊ {a◊} {a◊ {}} {{}{{}}}} {a◊ {a◊} {{}} {{}{{}}} {{} {{}} {{}{{}}}}}

which numbers are these?

p(c1) = · = 0'

p(c1,c2) = · {·} {· {·}} = 2' = 0' 0 1

p(c1,c2,c3) = · {·} {· {·}} {· {·}{·{·}}} {·{·} {·{·}} {·{·}{·{·}}}} {· {·} {·{·}} {·{·}{·{·}}} {·{·}{·{·}}{·{·}{·{·}}}}} = 5' = 2' 2 3 4

p(c1,c2,c3,c4) = 9' = 5' 5 6 7 8

p(c1,c2,c3,c4,c5) = 14' = 9' 9 10 11 12 13

p(c1,...,cn) = ( n(n+1)/2 - 1 )'

p(c1,...,cn) = · 0 1 ... (nn+n-4)/2
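the CRP above can also be simulated directly; a minimal sketch (our own code, with the concentration parameter named a after the outer dot):

```python
import random

def crp(n, a, seed=0):
    """chinese restaurant process: each customer joins an existing cluster
    with probability proportional to its size, or opens a new cluster
    with probability proportional to the concentration a (the outer dot)."""
    random.seed(seed)
    counts = []        # current cluster sizes
    assignments = []
    for _ in range(n):
        r = random.uniform(0, sum(counts) + a)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:               # join existing cluster k
                counts[k] += 1
                assignments.append(k)
                break
        else:                         # the dot wins: open a new cluster
            counts.append(1)
            assignments.append(len(counts) - 1)
    return assignments

print(crp(5, a=1.0))   # a list of 5 cluster labels, always starting with 0
```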

***

what does “confronting one’s number” mean? it means 1′ 1 = 2′, in other words, “not yet 1” and “already 1” together make up a “not yet 2”. these correspond to a prior, a likelihood and a joint probability.

1' 1 = 2'

·{·} {{}} = ·{·}{·{·}}

prior * likelihood = joint probability

p(a) p(b|a) = p(a,b)

here, likelihood does not correspond to 1. likelihood is that which distributes 1′ outwards to 1, and 1 is that which structures this likelihood. likelihood p(b|a) is the “likelihood of a”, it is in terms of the variable that is “not yet 1” and that which it makes into a “not yet 2”.

in its most pure sense, “likelihood” is the name of the operation for incrementing these unusual/lesser/probabilistic numbers.

L(1') = 1' 1 = 2'

L(n') = n' n = (n+1)'

the operation L(n’), by incrementing the number, expands its structure to include a higher dimensional “eventuality”. it adds a new binary variable that can be 0/1. it adds to the interface a new switch that can be on/off. it extends the space.
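a toy sketch of L as a table operation (our own framing): incrementing a joint table over n bits by a likelihood yields a table over n+1 bits:

```python
# L(n') = n' n = (n+1)': a likelihood adds one binary dimension (a new switch)
def L(joint, likelihood):
    """extend a joint table over n bits to n+1 bits.
    likelihood maps each old outcome to p(new bit = 1 | outcome)."""
    new = {}
    for outcome, p in joint.items():
        q = likelihood(outcome)
        new[outcome + (0,)] = p * (1 - q)   # the new switch is off
        new[outcome + (1,)] = p * q         # the new switch is on
    return new

prior = {(0,): 0.4, (1,): 0.6}                     # 1' (example values, our choice)
joint2 = L(prior, lambda o: 0.7 if o[0] else 0.2)  # 2' = L(1')

assert len(joint2) == 4                            # 2' has 2**2 dots
assert abs(sum(joint2.values()) - 1.0) < 1e-12
```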

so, from here, how can we reach marginals, posterior, evidence?

marginalization is an operation that reduces dimensionality. it is like a decrementing. it sums up over one of the dimensions.

1' = 2' \ 1

·{·} = ·{·}{·{·}} \ {{}}

if this representation is correct, decrementing in this way only sums over the last variable. how do we marginalize the first variable? it seems that decrementing a dotted number has some kind of directionality: n’ contains n increments since 0′. which of these n increments are you decrementing?

in our case, there are two possibilities, the second increment or the first increment.

1' = 2' \ 1 = a{b}{c{d}} \ {{}} = a+c{b+d}

(decrement the second increment)

1' = 2' \ 1 = a{b}{c{d}} \ {{}} = a+b{c+d}

(decrement the first increment)

decrementing the second increment gives back the “prior”, whereas decrementing the first increment gives the “evidence”. how are these two (1′)s related?
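with example weights (a, b, c, d are the four dot-probabilities of 2', arbitrary values summing to 1, our choice), the two decrement directions give:

```python
# 2' = a{b}{c{d}}: a, b, c, d are the four dot-probabilities
a, b, c, d = 0.32, 0.18, 0.08, 0.42   # example values summing to 1 (our choice)

prior    = (a + c, b + d)   # decrement the second increment: p(b1=0), p(b1=1)
evidence = (a + b, c + d)   # decrement the first increment:  p(b2=0), p(b2=1)

assert abs(sum(prior) - 1.0) < 1e-12
assert abs(sum(evidence) - 1.0) < 1e-12
```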

we have to see that in an alternative incrementing, “evidence” and “prior” could be interchanged. so, how do we “count” among these (1′)s? it is all about the ordering of the likelihoods (a prior is a likelihood from a universal event). in fact, likelihoods do not form a chain, but a tree (graphical model).

say we have

p(b1) p(b2|b1) p(b3|b1) = 1' 1 1 = 2' 1 = 3'

here, 3′ is not constructed by a chain such as

1' 1 = 2'

2' 2 = 3'

so when 2′ is confronted by 1, a smaller number, it means it is being partially incremented. this means that the order of the 1s does not matter, but we now have to mark their difference. their difference is given by the “graphical model” that conditions every increment to a set of earlier increments, e.g. b3 and b2 are not conditioned to each other, but both are conditioned to b1. incrementing is conditioning. every increment is conditioned to some previous increments. an independent increment is called a “prior” and a dependent increment is called a “likelihood”.

similar to the dependency of an increment, a decrement concerns only a given set of previous increments. one first chooses a set of increments, and either makes an increment depending on them, thus extending the structure, or makes a decrement that removes them from the structure.

due to this property, we can call these numbers, “conditional numbers” that need to be conditionally incremented and decremented, as opposed to “unconditional” or “natural numbers” that can be incremented & decremented without any conditioning.

the basic difference is that conditional numbers begin from the universal condition given by the middle dot, and everything follows from there, the conditioning is updated as the structure extends and reduces:

5 = 1+1+1+1+1

5' = · 0 1 2 3 4 (full chain)

5' = · 0 0 0 0 0 (all independent)

5' = · 0 1 1 1 1 (tree)

5' = · 0 1 2 0 1 (two disconnected parts)

...

note that this notation only shows how many dependencies an increment has. in the full chain, this info is adequate, but in the others, the dependencies must also be designated (apart from the 0s, which denote priors).
