8/3/24, 3:18 PM
Blog, October 2024
Blog
Rastko Vuković
October 2024
October 2024 (Original ≽)
Standard Space » srb
Question: Are there different ways of specifying the "lengths" of vectors, and what
do you do with them?
Answer: All such measurements have their starting point in standard vector spaces, the kind you learn in high school. Adding, for example, seven to three lengths gives ten units of length; this is called linearity and thus represents the shortest path.
r = |r⃗| = |(x, y)| = √(x² + y²) = c.
Consistently, in the standard space, the scalar product for fixed P(a, b) is:
f(x, y) = p→ ⋅ r→ = (a, b) ⋅ (x, y) = ax + by.
This sum of products (f), in which the coordinates are the corresponding information, is the "information of perception".
We find the maximum of such a function, for example, by substituting y = √(c² − x²) from the constraint, differentiating, and equating the derivative to zero:

df/dx = (ax + b√(c² − x²))′ₓ = a − bx/√(c² − x²) = 0,

x₀ = ac/√(a² + b²),   y₀ = bc/√(a² + b²).
When these vectors are parallel, (a, b) ∥ (x, y), it will be x = λa and y = λb. Therefore, substituting this proportion into the above point, c² = λ²(a² + b²).
In the standard space, this distance measure corresponds to the ordinary Pythagorean theorem: the point R(x, y) lies at the distance r from the origin O(0, 0) of the rectangular Cartesian coordinate system.
In the picture above on the right is the graph of the function f = 3x + 4y under the constraint x² + y² = 5², around the maximum (x₀, y₀) = (3, 4). There λ = 1 and max f(3, 4) = 25. We see how f increases dramatically towards its maximum value, f → max f, while the abscissa approaches the zero of the derivative, x → x₀.
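The closed form for the maximum can be checked numerically; here is a minimal Python sketch (the helper `max_on_circle` and the brute-force search are our illustration, not part of the text):

```python
import math

def max_on_circle(a, b, c, n=20000):
    """Maximize f(x, y) = a*x + b*y over the circle x^2 + y^2 = c^2 by brute force."""
    best_f, best_x, best_y = -math.inf, 0.0, 0.0
    for k in range(n):
        t = 2 * math.pi * k / n                  # parametrize the circle
        x, y = c * math.cos(t), c * math.sin(t)
        f = a * x + b * y
        if f > best_f:
            best_f, best_x, best_y = f, x, y
    return best_f, best_x, best_y

# The blog's example: f = 3x + 4y on x^2 + y^2 = 5^2.
# Closed form: x0 = ac/sqrt(a^2+b^2) = 3, y0 = bc/sqrt(a^2+b^2) = 4, max f = 25.
f_max, x0, y0 = max_on_circle(3, 4, 5)
```

The numerical search lands on the same point the derivative condition selects.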
In the case of a proper interpretation, a larger scalar product will mean more perceptual information. When the multiplied vectors represent two sides in a game, a larger product means a greater vitality of the game, which we interpret as greater mastery. Living beings behave in opposition to the principle of least action, from which theoretical physics has derived all its trajectories to date, while dead matter spontaneously avoids such maxima.
Linearity » srb
Question: Where is the "quantum superposition" in vector intensities?
Answer: When a quantum system
allows the states ψ1 and ψ2, then such a
system also allows every linear
combination of them:
ψ = α1ψ1 + α2ψ2,
|ψ⟩ = α1|ψ1⟩ + α2|ψ2⟩.
This second way of writing it, using Dirac's bra-ket notation, is common in quantum physics. We see this in the case of "Schrödinger's cat," which, in a box (an exaggerated simulation of a quantum system), can be both alive and dead at the same time until the box is opened and a statement is made. When the prior uncertainty of the cat's state (ψ₁ alive, ψ₂ dead) broadcasts information, more certainty remains in the box. We call this the collapse of the superposition into one of the possible outcomes.
THEME: Vector spaces have different "intensities" of vectors and corresponding "distances" between points in metric spaces. We talk about this in a simple, popular, and reasonably accurate way, all in order to see the ranges of "perceptual information".
In the picture on the upper left there is a link that explains the same. The objectivity of immersion, like that of the structure of quantum superposition, is important in that concept. We cannot distinguish the separate elements of a quantum superposition in any way. They are not individuals until they emit some information, which alone can increase the certainty of their structure. In addition to this first important note, the following one is also essential.
My position in information theory is that there must be some coercion for the
superposition to collapse into one of the possibilities. This is a consequence of the
principle of "more frequent occurrence of more likely outcomes," that is,
"spontaneity of the development of states into less informative ones" (minimalism).
That "force" is in the process of measurement and interaction with measuring devices. Physical states radiate their information spontaneously and, on the other hand, have no choice, because there is no stale news, and that is the basis of space, time, and matter.
White light is a superposition of several colors of different wavelengths.
Complementary colors are pairs that, when mixed, cancel each other out, lose their
color, and give a gray shade somewhere between white and black. When placed
next to each other, complementary colors create the strongest contrast between the
two colors. They are also called "opposite colors." If, for example, red is separated
from white light, its opposite green remains, or when blue is separated, orange
remains, and if yellow is omitted, violet remains.
A prism, the medium of different speeds of electromagnetic waves, separates the
color spectrum of white light. We see these waves if their wavelengths are between
400 and 700 nanometers. White light is the interference of seven colors: violet, indigo, blue, green, yellow, orange, and red. The smallest wavelength, 400 nm, is violet, and the largest, 700 nm, is red. They can be added differently:
ψ = α1ψ1 + α2ψ2 + ... + α7ψ7,
f(x) = a1sin(λ1x + b1) + a2sin(λ2x + b2) + ... + a7sin(λ7x + b7).
This second way of writing it, with trigonometric functions, is the custom of wave
mechanics. The amplitudes of individual waves are ak, while λk are wavelengths,
and bk phase shifts (k = 1, 2,..., 7). However, that linear combination of light of
different wavelengths in classical mechanics is not an "objective superposition" as
known in quantum mechanics. It is here only a convenient example, an
approximate description of a quantum phenomenon.
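The linearity of such a classical wave sum can be illustrated in Python. The sketch uses the text's formula f(x) = Σ aₖ sin(λₖx + bₖ) literally, with illustrative values for λₖ and equal amplitudes (our choices; physically the spatial frequency would be 2π/λₖ):

```python
import math

def superposition(x, amps, lams, phases):
    """f(x) = sum_k a_k * sin(lam_k * x + b_k), the text's classical wave sum."""
    return sum(a * math.sin(l * x + b) for a, l, b in zip(amps, lams, phases))

amps   = [1.0] * 7                               # a_k: equal amplitudes (our choice)
lams   = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2]     # lam_k: illustrative values
phases = [0.0] * 7                               # b_k: no phase shifts

# Linearity: the superposed wave equals the sum of the individual waves.
x = 0.7
whole = superposition(x, amps, lams, phases)
parts = sum(superposition(x, [a], [l], [b]) for a, l, b in zip(amps, lams, phases))
```

The equality of `whole` and `parts` is exactly the linearity that makes such combinations possible.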
We know that a phase shift turns a sine function into a cosine function. Thus, due
to linearity, we can add them to an exponential function and even to the solution of
the Schrödinger equation of, for example, a free particle:
ψ(r, t) = A e^(i(p⋅r − Et)/ℏ),   ψ*(r, t) = A* e^(−i(p⋅r − Et)/ℏ),
where r = (x, y, z) is the position at time t, p = (pₓ, p_y, p_z) the momentum, and E the energy of the given particle. The amplitude of its wave is A. The imaginary unit satisfies i² = −1, and ℏ = h/2π is the reduced Planck constant. The complex conjugate of ψ is the number ψ*. Several such formulas can form a superposition by adding together, as in the case of the previous formulas.
When we represent such particles-waves in orthonormalized systems of vectors,
quantum states, then their intensity is a scalar product:
ψ1*ψ1 + ψ2*ψ2 + ... + ψn*ψn = 1,
A1*A1 + A2*A2 + ... + An*An = 1,
whence the conclusion that the squares of the amplitude moduli, |Aₖ|² = Aₖ*Aₖ, index k = 1, 2, ..., n, form the probability distribution contained in the superposition. See the correct proof in the book Quantum Mechanics (1.1.6 Born rule). Isolating one of its outcomes, when the superposition collapses into one of the given states, will reduce the uncertainty of the whole and make the rest more defined, more certain. Thus, for example, the path of an electron is created by measuring it.
In short, we see quantum superposition as a distribution of probabilities if the
quantum states are interpretations of orthonormal vectors, where the probabilities
of individual outcomes are the squares of the modulus of the amplitude.
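The Born rule described above can be sketched in a few lines of Python. The two-state amplitudes below are our hypothetical illustration, not values from the text:

```python
import random

# A hypothetical two-state superposition with complex amplitudes A_k.
amplitudes = [0.6, 0.8j]                    # |0.6|^2 + |0.8i|^2 = 0.36 + 0.64 = 1
probs = [abs(A) ** 2 for A in amplitudes]   # Born rule: p_k = |A_k|^2

def collapse(probs, rng):
    """Sample one outcome index k with probability p_k ('collapse' of the state)."""
    r, acc = rng.random(), 0.0
    for k, p in enumerate(probs):
        acc += p
        if r < acc:
            return k
    return len(probs) - 1

rng = random.Random(0)
freq = sum(collapse(probs, rng) for _ in range(10000)) / 10000  # near p_1 = 0.64
```

Repeated "measurements" reproduce the squared-modulus probabilities, which is all the Born rule asserts here.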
Impossibility » srb
Question: What makes "objective superposition" different from the linear
combination of light of various wavelengths in classical mechanics?
Answer: Quantum superposition
combines forms of interference,
probability distributions, and
uncertainty relations and is "objective"
because of this third.
Heisenberg (1927) was among the first
to notice the uncertainty principle by
looking for ways to determine the exact
position and momentum of an electron
using light of different wavelengths. To
locate the position more accurately, we
need light with shorter wavelengths. However, it brings more energy to the
collision with the electron, thus increasing the uncertainty of its output
momentum. The more we nail down a particle's position, the less we know about its
speed. It is shown that the product of those two uncertainties is of the order of magnitude of Planck's constant, the quantum of action.
Namely, the product of the wavelength λ and the frequency of light ν = 1/τ, where τ is the period of oscillation, is the speed of light, c = λν. We know the energy of light E = hν, where h is Planck's constant; also mc² = hc/λ, and hence the photon momentum p = mc = h/λ. That such a momentum really exists, and amounts to this much, is routinely checked today by means of the pressure of photons on various obstacles. Light pushes objects.
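These relations are easy to evaluate; a short Python sketch (the constants are the usual values, assumed here, and the two wavelengths bound the visible range discussed above):

```python
h = 6.626e-34   # Planck's constant, J*s (value assumed here)
c = 2.998e8     # speed of light, m/s

def photon(lam):
    """Momentum p = h/lam and energy E = h*c/lam for wavelength lam in metres."""
    return h / lam, h * c / lam

p_violet, E_violet = photon(400e-9)   # short wavelength: bigger kick
p_red, E_red = photon(700e-9)         # long wavelength: gentler probe

# Heisenberg's estimate: Delta x ~ lam and Delta p ~ h/lam, so their product
# is h, independent of the chosen wavelength.
dxdp = 400e-9 * (h / 400e-9)
```

The shorter wavelength locates the electron better but kicks it harder, while the product of the two uncertainties stays at the order of h.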
Putting Δx = λ and Δp = h/λ for the position and momentum uncertainties, we get Δx·Δp = h. It is a rough estimate of the inequality Δx·Δp ≥ h/4π, which represents the principle of uncertainty. The more precisely we know the location of the particle-wave, the smaller Δx is, and the greater is Δp, the indeterminacy of its momentum.
Similarly, the resolution of a single film image is "squeezed" against the resolution of the stream, the number of frames per second, on a video recording. Thus, we discover the impossibility of wider knowledge and the need to renounce one kind of knowledge in order to learn something else.
The image, top right, shows that a smaller aperture Š will determine a better
position of the particle-wave (along the y ordinate), but a more obscure momentum
of it on screen S. The link in that image will direct you to its further explanations.
What is true for the micro-world and quantum superposition is less and less found
in the macro-world, especially in classical mechanics. Uncertainties become
certainties through the law of large numbers in probability theory.
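That passage can be illustrated with a short Python sketch (the coin-flip model and the sample sizes are our illustration of the law of large numbers, not from the text):

```python
import random

def sample_mean(n, rng):
    """Mean of n fair 0/1 draws; its spread shrinks like 1/sqrt(n)."""
    return sum(rng.random() < 0.5 for _ in range(n)) / n

rng = random.Random(42)
m_small = sample_mean(10, rng)       # can easily be 0.2 or 0.8
m_large = sample_mean(100000, rng)   # pinned near 0.5
```

Micro-level uncertainty (each single flip) becomes macro-level certainty (the mean) as the number of events grows.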
The indeterminacy of the position operator, x̂ψ = xψ, and the momentum operator, p̂ψ = pψ, determined by their eigenvalues, we find in the form:

[x̂, p̂]ψ = (x̂p̂ − p̂x̂)ψ = iℏψ,

from which it follows for the commutator that |[x̂, p̂]| ≥ ℏ ≥ h/4π. We can also regard these quantities as information. In Quantum Mechanics (1.4.4 Uncertainty principle) there are
such as information. In Quantum Mechanics (1.4.4 Uncertainty principle) there are
several more interesting ways of deriving these relations, i.e., the principle of
uncertainty. I made it as a script for private use, and that's why the text is
thoroughly reviewed and reliable. The various proofs should convince us that the principle of indeterminacy is fundamental, rather than a limitation arising from (imperfect) measurement methods themselves.
Surface III » srb
Question: The commutator is information and a surface, while the sum of products is the information of perception and intensity. Can you give examples?
Answer: On the left, we see in blue the graph of the function f = f(t), with differences in red as small as possible, Δg(t) = g(t′) − g(t″). Those lengths are parallel to the ordinate, the y-axis, and their product f⋅Δg → 0 does not yet represent the area separated by the lines of the graph.
In the limiting case, these are infinitesimal products, dS = f⋅dg, which, added from a to b along the abscissa (t-axis), give the sum of products, the integral:

S = ∫ₐᵇ f(t) dg(t).
In my contribution on functional analysis (4.4. Proposition) you will find that the integral written this way is a bounded linear functional on the space C[a, b], with x → f, where g(t) is a function of bounded variation on the domain [a, b]. There you will also find explanations and proofs of these concepts. The larger such an integral, the greater the information of perception and the greater the values of the commutators that would correspond to its interpretation.
Note that changing the red function by a constant value does not change its differences: when g → g + c, the Δg remain the same. As the points t₁ = a, t₂, ..., tₙ = b of the abscissa give the ordinates fₖ = f(tₖ) and gₖ = g(tₖ), so that fₖ = gₖ + yₖ, with some increments yₖ, respectively for k = 1, 2, ..., n−1, the above sum is approximately:
Sₙ = ∑ₖ₌₁ⁿ⁻¹ (fₖ₊₁ + fₖ)/2 · (gₖ₊₁ − gₖ)

= ∑ₖ₌₁ⁿ⁻¹ ((gₖ₊₁ + gₖ) + (yₖ₊₁ + yₖ))/2 · (gₖ₊₁ − gₖ)

= ∑ₖ₌₁ⁿ⁻¹ ½(gₖ₊₁² − gₖ²) + ∑ₖ₌₁ⁿ⁻¹ (yₖ₊₁ + yₖ)/2 · (gₖ₊₁ − gₖ)

= ½(gb² − ga²) + μy ∑ₖ₌₁ⁿ⁻¹ (gₖ₊₁ − gₖ),
where μy is some mean of the midpoint values μₖ = (yₖ₊₁ + yₖ)/2. Continuing:
Sₙ = ½(gb² − ga²) + μy(gb − ga) = (gb − ga)((gb + ga)/2 + μy),

Sₙ = (gb − ga)(μg + μy),
where μg is the mean value of the function g(t) on the interval t ∈ (a, b), while μy is the mean deviation of the function f from g at the given points. Here gb = g(b) and ga = g(a). It is easy to see that this (approximate) result does not change significantly by refining the subdivision, so Sₙ → S as n → ∞.
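The sum Sₙ above can be coded directly; a minimal Python sketch (the test functions sin and exp are our illustration):

```python
import math

def stieltjes_sum(f, g, a, b, n=2000):
    """The text's trapezoid-style sum S_n = sum (f_{k+1}+f_k)/2 * (g_{k+1}-g_k),
    approximating the Riemann-Stieltjes integral S = int_a^b f(t) dg(t)."""
    ts = [a + (b - a) * k / n for k in range(n + 1)]
    return sum((f(ts[k + 1]) + f(ts[k])) / 2.0 * (g(ts[k + 1]) - g(ts[k]))
               for k in range(n))

# g -> g + const leaves S_n unchanged: only differences of g enter.
S1 = stieltjes_sum(math.sin, math.exp, 0.0, 1.0)
S2 = stieltjes_sum(math.sin, lambda t: math.exp(t) + 7.0, 0.0, 1.0)

# f = g (the mu_y = 0 case): the sum telescopes to (g_b^2 - g_a^2)/2 exactly.
S3 = stieltjes_sum(math.exp, math.exp, 0.0, 1.0)
```

The two observations in the derivation, invariance under g → g + c and the telescoping of the squares, both show up numerically.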
Otherwise, a "norm" is a positive, homogeneous measure of the "length" (∥x∥, i.e., the intensity) of a vector that satisfies the triangle inequality (the shortest path), and from it the "metric" is derived as the distance between two vectors, d(x, y) = ∥x − y∥. Then, in different metric spaces, we define different norms (1.11. Examples). The C[a, b] mentioned here takes for ∥x∥ the maximum value of |x(t)| on the given interval, a ≤ t ≤ b. Then there is the above functional, whose interpretation is the sum of products, as well as the information of perception.
1. Example. We can observe the space C[a, b] within the framework of (played and
unplayed) tennis games in general and tennis tournaments. Keeping a player at the
top of the ATP list (individuals) is the player's "intensity".
So, for the period t from 1973 to 2024, Novak Djokovic spent 428 weeks at the top, Roger Federer 310 weeks, Pete Sampras 286, Ivan Lendl 270, etc., in descending order down to the 29th tennis player, Patrick Rafter, with only one week at the top. Behind him are many players z without such success, whose intensity is ∥z∥ = 0.
The distance from the first (∥x∥ = 428) to the second (∥y∥ = 310) can be taken as a
simple absolute difference of the norm d(x, y) = |428 - 310| = 118, but also d(x, y) =
∥x - y∥. The latter is the maximum difference calculated from week to week and has
the same result. □
The unofficial matches of this example are irrelevant to this norm on C[a, b], which is otherwise formally correctly defined. All players below those 29 have zero norm and constitute the null-space, even though some of them may have beaten the best.
2. Example. We consider the space C[a, b] as geographical places on the earth,
with the "intensities" of points with maximum temperatures during a certain
period (a, b). It can happen, say in a desert, that the maximum temperature is very
high (72 °C), although during the night the same coordinates can have very low
temperatures, say below the temperatures ever experienced by some temperate
environments. However, the norms determined by the maxima are well-defined.
Notice that the "distance" of these places, now equal to the absolute difference of
their "intensities," has nothing to do with distances in meters. Furthermore, zero
"length" points (or vectors) can be anywhere, not just at one origin of the usual
spatial coordinate system. These zero-length vectors are kernels (2. Domains) when
Tx = 0 is understood as mapping T of the place-vector x to its maximum
temperature. □
The metric spaces of the examples are neither Banach nor do they guarantee the
function of bounded variation or the existence of the upper integral, but they help
in understanding. In the first example, if we imagine the information of perception
as a meeting between two teams of arranged sets of tennis players, corresponding
to mutual competitors, then the sum of the products of the intensity of those
players is greater if the stronger player of the first team plays against the stronger
player of the second and the weaker player against the weaker one. The quality of the tournament, from the point of view of mastery, the vitality of the games played, is then greater.
In the second example, an analogously higher score of the sum of the products
means the comparative observation of pairs of hotter with hotter as well as colder
with colder places, two series of selected positions on the earth. Such comparisons
of temperature scales are more informative for the study of the Earth's climate than
any other comparison. In both examples, when the range of the vector of teams and
rows of players is smaller, they span a smaller area and give a larger sum of
products, i.e., information of perception. Conversely, matching commutators would
give a higher result.
Volume II » srb
Question: Do you have an example of the volume of "perception information"?
Answer: We normally look at "surface"
as a 2-dim, special case of "volume".
For example, in the image on the right,
the graph of the function f = f(t) is
drawn along the lower plane (Otf), next
to the graph g = g(t) on the left side
(Otg). Such are the imaginary
projections of the surface, whose points
can also be projected onto a distant
(Ofg) plane. It is the image of the surface, say, z = z(x, y), in a rectangular Cartesian 3-dimensional coordinate system (Oxyz).
Let's note that for a fixed abscissa, here the t-axis, this surface in the plane Otg
represents a curved line that defines some surface below it to the ordinate (f-axis).
If we eliminate the constant t = t0 from the given surface equations, we can get g =
g(f), the immediate dependency of two functions. It is a matter of elementary
mathematics.
However, by multiplying such g by f, we define the surfaces of each separate fixed
t0. Gradually changing the constants from t = a to t = b, we get the volume, the
integral (4.6. Proposition):
V = ∫ₐᵇ f(t)g(t) dt.
This V is a bounded linear functional on the space Lp(a, b), where the parameter 1 < p < ∞, with f ∈ Lq(a, b), the parameters related by 1/p + 1/q = 1, and norm ∥V∥ = ∥f∥q. By the functional V on Lp, the function f ∈ Lq is uniquely determined.
The norm of Lp(a, b), with parameter p > 1 and the domain (a, b) of the continuous variable t, which we are talking about here, defines the "intensity" of the vector, i.e., of the integrable functions x = x(t), by:

∥x∥ = (∫ₐᵇ |x(t)|ᵖ dt)^(1/p).
Only in the case p = 2 is this the "normal," common geometric length, when integration is used as addition. It is again a type of metric space (Lp space) which also includes the previous one (Surface III), because:

lim(p→∞) (a₁ᵖ + a₂ᵖ + ... + aₙᵖ)^(1/p) = max(1≤k≤n) aₖ.
Thus, the norm of Lp(a, b) reduces to the norm of C[a, b] when p → ∞. For the "distance" from the vector (point, or function) x to y one can often take d(x, y) = ∥x − y∥, and then in this case:
d(x, y) = (∫ₐᵇ |x(t) − y(t)|ᵖ dt)^(1/p).
It is easy to check that the function d satisfies the conditions for a metric. In such a metric space, the above V is a type of linear functional, and thus "perception information". Of course, there are other examples of the volume of "perceptual information," no rarer or less interesting.
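The limit connecting the Lp norm with the C[a, b] maximum norm can be seen numerically; a minimal Python sketch on a finite sequence (the sample values are ours):

```python
def p_norm(xs, p):
    """Discrete p-norm (sum_k |x_k|^p)^(1/p) of a finite sequence."""
    return sum(abs(x) ** p for x in xs) ** (1.0 / p)

xs = [0.3, 0.9, 0.5, 0.7]
norms = [p_norm(xs, p) for p in (1, 2, 4, 16, 64, 256)]
# As p grows, the p-norm decreases toward max_k |x_k| = 0.9, the C-style norm.
```

The sequence of norms is nonincreasing in p and settles on the maximum element, just as the limit formula states.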
Lp spaces » srb
Question: Why is that alternative metric space Lp even important?
Answer: That Lp, where p ≥ 1, is not an "alternative" case of a function space but a broader concept. Other metric spaces, better known to us, are exceptional choices of Lp for certain values of the parameter p.
The elements of the space Lp(a, b) are the functions f(x) with convergent integrals:

∫ₐᵇ |f(x)|ᵖ dx < ∞,   p ≥ 1.
1. Example. Each bounded function on [0, 1] is automatically in Lp(0, 1) for every value of p. But it is possible for the p-norm of a measurable function on [0, 1] to be infinite. First of all, there is the function f(x) = 1/x with p = 1, namely:
∫₀¹ |f(x)| dx = lim(a→0⁺) ∫ₐ¹ dx/x = lim(a→0⁺) ln x |ₐ¹ = ∞,

so f is not in L1. It has a vertical asymptote at the point x = 0. □
2. Example. The function f(x) = 1/√x on [0, 1] has a vertical asymptote but still does not have an infinite 1-norm. Namely:

lim(x→0⁺) 1/√x = ∞,   but   lim(a→0⁺) ∫ₐ¹ dx/√x = lim(a→0⁺) 2√x |ₐ¹ = 2,

so f ∈ L1. □
3. Example. In general, on the interval of integration (0, 1):

lim(a→0⁺) ∫ₐ¹ dx/xʳ = ∞ for r ≥ 1, and = 1/(1 − r) for r < 1.

Hence, the function f(x) = 1/xʳ ∈ Lp if and only if pr < 1, i.e., p < 1/r. □
4. Example. However, on the interval of integration (1, ∞):

lim(b→∞) ∫₁ᵇ dx/xʳ = 1/(r − 1) for r > 1, and = ∞ for r ≤ 1,

so the function f(x) = 1/xʳ ∈ Lp if and only if pr > 1, i.e., p > 1/r. □
For more complex functions f ∈ Lp, these questions of convergence around vertical and horizontal asymptotes are generally harder, and the above examples then gain importance. In the previous figure on the left there are examples of "circles" of the space Lp with different values of the parameter p. The link to that picture leads to some other special features of these general spaces.
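The convergence claims in Examples 1 and 2 are easy to verify with the closed forms of the integrals; a small Python sketch (the cut-off values a are our choices):

```python
import math

def tail_integral(s, a):
    """Closed form of int_a^1 x^(-s) dx for 0 < a < 1."""
    if s == 1:
        return -math.log(a)
    return (1 - a ** (1 - s)) / (1 - s)

# f(x) = 1/sqrt(x): s = 1/2 < 1, the integrals converge to 2, so f is in L1(0, 1).
sqrt_vals = [tail_integral(0.5, a) for a in (1e-2, 1e-4, 1e-8)]

# f(x) = 1/x: s = 1, the integrals grow without bound, so f is not in L1(0, 1).
inv_vals = [tail_integral(1.0, a) for a in (1e-2, 1e-4, 1e-8)]
```

As a → 0⁺, the first sequence approaches 2 while the second grows like −ln a, reproducing the dividing line pr < 1 of Example 3.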
5. The points of the space Lp(a, b) are integrable functions f(t) whose p-norms, or p-intensities, are defined by the integral:

∥f∥p = (∫ₐᵇ |f(t)|ᵖ dt)^(1/p).
Based on this, metrics or "distances" between two "points" are defined by dp(f₁, f₂) = ∥f₁ − f₂∥p. There is a special norm, ∥f∥∞, called the essential supremum of the function f. Special variants of these spaces are the ℓp, which refer to infinite series of numbers x = (ξ₁, ξ₂, ξ₃, ...), convergent in the sense of the following norm:

∥x∥p = (|ξ₁|ᵖ + |ξ₂|ᵖ + |ξ₃|ᵖ + ...)^(1/p) < ∞.
Generally treated in this way, we call these spaces Lebesgue spaces. They are
significant in probability theory.
6. For a given p > 1, on the space Lp(a, b) there exist functionals V : Lp → ℝ which map an arbitrary point, a function f ∈ Lp, into some number (Volume II). The same proposition further establishes that for every such functional V there is a unique function g ∈ Lq, on the same domain (a, b) but with the q-norm such that 1/p + 1/q = 1. We interpret functionals as the information of perception, so the theorem means that the perception of each subject is unique.
These two, p-norm and q-norm, when 1 ≤ p ≤ ∞ and 1/p + 1/q = 1, we call dual
exponents. For example, the exponent p = 1 has a dual exponent q = ∞, and vice
versa. The dual of the exponent p = 2 is q = 2. The dual of the exponent 3 is 3/2,
and vice versa, the dual of the exponent 3/2 is 3.
7. Proposition. For the mentioned duals, 1/p + 1/q = 1, Young's inequality holds:

ab ≤ a^p/p + b^q/q,

for all a, b ≥ 0 and 1 < p < ∞.
Proof: We fix b and define the function:

f(x) = x^p/p + b^q/q − xb,   f′(x) = x^(p−1) − b.

From the derivative we see that the function is decreasing on the interval (0, b^(1/(p−1))) and increasing on the interval (b^(1/(p−1)), ∞). Therefore, its minimum is at the stationary point x = b^(1/(p−1)), where f = b^q(1/p + 1/q) − b^q = 0. Hence f(x) ≥ 0 for all x ∈ (0, ∞), and that is the proposition. ∎
This can be used to prove special inequalities, very useful in functional analysis, probability theory, and information theory. The Schwarz, Hölder, and Minkowski inequalities are often mentioned there, with proofs. For example, a consequence of Hölder's inequality is an inequality between norms of different exponents:

∥f∥p ≤ ∥f∥r ⇔ 0 < r < p.
Increasing the exponent p of the space Lp decreases the intensities ∥f∥p of its
functions, but the distances between them also decrease dp(f, g) = ∥f - g∥p. Larger
exponents of the functional V : Lp → ℝ will be determined by more intensive
generators g ∈ Lq of dual space. It takes more vitality to perceive less vitality.
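Young's inequality from the proposition above can be checked numerically; a short Python sketch (the random sampling ranges are our choices):

```python
import random

def young_gap(a, b, p):
    """a^p/p + b^q/q - a*b with the dual exponent q = p/(p-1); Young's
    inequality says this gap is never negative."""
    q = p / (p - 1)
    return a ** p / p + b ** q / q - a * b

rng = random.Random(1)
gaps = [young_gap(rng.uniform(0, 10), rng.uniform(0, 10), rng.uniform(1.1, 5))
        for _ in range(1000)]

# Equality holds exactly when a^p = b^q; e.g. a = b = 1 gives 1/p + 1/q - 1 = 0.
equality_case = young_gap(1.0, 1.0, 2.0)
```

A thousand random (a, b, p) triples never produce a negative gap, and the stationary point of the proof is the equality case.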
Vitality » srb
Question: Why "does it take more vitality to perceive less vitality"?
Answer: That was the conclusion of the previous section. Interaction is always "communication," that is, the exchange of some "information," which is represented by functionals. That contribution (Lp spaces, 6) mentions a vector space Lp whose norm uses exponents p ≥ 1 and whose vectors are mapped to numbers by its functionals, such as V.
However, vectors are also linear operators, there (Lp) also points of a metric space, and then functions in general. Representations of vectors are quantum states, and states of systems in general, and this goes all the way to the information of perception, which is a representation of a functional. The information of the object itself in that space is represented by the function f = f(t), which we can understand as a process around some subject. The theorem further states that the coupling, the communication of a particular subject g = g(t) with the environment, and thereby the information of the perception of subject and object, is unique.
Each linear mapping (V, a functional) of states (f) into numbers has a unique subject (g), so that it can be written by the above integral (V = ∫ fg dt) on the given interval. At the same time, the states (f) of the p-norm are mapped through the interaction of subjects (g) of the dual q-norm, for which 1/p + 1/q = 1. We see from this condition that a larger p gives a smaller q, and vice versa. A larger p-norm of an object will correspond to a smaller q-norm of a unique subject, and vice versa.
The reverse is also true: a smaller exponent p of the environment (measured by Lp) corresponds to a larger dual exponent q of the subject (measured by Lq) unique to the perception. Let us compare this, the growth of the "amount of observation," with the "amount of options" (information) of the participants of observation (p-norms):

∥f∥p ≤ ∥f∥r ⇔ 0 < r < p.
These inequalities say that to larger exponents p of the space Lp belong smaller p-norms of its points. These are smaller norms of the process f, that is, reduced measures of "information intensity." We are also speaking of the present amounts of uncertainty of given objects, that is, of the powers of perception of given subjects. Smaller ambient intensities ∥f∥p require higher intensities ∥g∥q of the unique observers (generators g). On the other hand, this is again logical in its own way.
Namely, vitality is the excess of a system (a living being) with respect to simple physical substance (non-living matter). The above observations also apply if the "amount of options" of a given system (subject) is greater than the minimum, so it does not have to follow the principle of least action in physics. Then, among other things, it can lie. We realized long ago (Not really) that there is no discovery in mathematics without lies (not only by the method of contradiction), nor understanding of physical reality (truth and only truth) without that excess, i.e., vitality.
It is not possible to prove the inaccuracy of something physically real that follows
the principle of least action and has no further options. Vitality adds to it the power
to choose and lie, which means that dead physical matter does not know itself,
meaning it is not aware of itself. In other words, it takes vitality to know the
absence of vitality. Plants and primitive animals are not self-conscious, and again,
we find that it takes more vitality to know less vitality. Likewise, it takes more
vitality to perceive less (amount of) vitality.
Basis II » srb
Question: How is it that an arbitrary integrable function f(t) is a vector?
Answer: We continue the conversation about functionals (Lp spaces). Now pay
attention to the image on the left and the vector:
OT→ = a₁b₁ + a₂b₂ + a₃b₃,
whose base vectors are denoted by bold
letters. For basis, we can take anything,
but only if they satisfy the axioms of
these vector spaces, which is the
possibility of (linear) extension.
Concatenating a1 times the base vector
b1, then continuing a2 times with the
base vector b2, then a3 times with b3,
from the origin O of the coordinate system, we reach the point T. Linear
independence of base vectors b1, b2, and b3 is actually the reason that these
coefficients a1, a2, and a3 are uniquely determined by point T. Here's how.
If there were two records of the point T in a given basis, there would be:
a'1b1 + a'2b2 + a'3b3 = a1b1 + a2b2 + a3b3,
(a'1 - a1)b1 + (a'2 - a2)b2 + (a'3 - a3)b3 = 0,
a'1 - a1 = 0, a'2 - a2 = 0, a'3 - a3 = 0,
because otherwise one of the base vectors could be (linearly) expressed with the
other two, which is impossible if they are linearly independent.
Let us further imagine that such a system has many basis vectors, an infinite sequence b_t whose index t = 1, 2, 3, ... runs over a countably infinite set, or even more numerous, when the index belongs to some interval (continuum) of real numbers, t ∈ (a, b) ⊂ ℝ. Then, instead of a_t, we write f(t), and instead of b_t, we write g(t). However, instead of a continuum of basis vectors, we usually work with norms (intensities) of linear combinations (sums), which means integrals.
In the most numerous discrete given basis, of dimension n → ∞, unique sequences of vectors u = (a₁, a₂, ..., aₙ) are combined with v = (b₁, b₂, ..., bₙ), otherwise the coefficients attaching basis vectors to a given point. Linear functionals, or mappings of vectors u to numbers, are uniquely represented by vectors like v through the scalar product

u⋅v = a₁b₁ + a₂b₂ + ... + aₙbₙ.
The first and second strings are dual vectors (Variance), analogous to those dual
spaces Lp and Lq that we are working with now. In the case of uncountably infinite
bases, we arrive at the functional V : f → ℝ, the volume:

V = ∫ₐᵇ f(t)g(t) dt,
which maps the elements f(t) = f_t of the basis g_t in a unique way. No matter how n-dimensional the given vectors u = (a₁, a₂, ..., aₙ) and v = (b₁, b₂, ..., bₙ) are, they span one 2-dim space. Similarly, in the continuum of dimensions of these integrable functions, we manage to understand the functional V as a 3-dim volume.
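The finite-dimensional version of this representation is easy to demonstrate; a minimal Python sketch (the vectors are our illustration):

```python
def functional_from(v):
    """Finite-dimensional analogue of the duality in the text: every linear
    functional on R^n is u -> u . v for a unique representing vector v."""
    return lambda u: sum(a * b for a, b in zip(u, v))

v = [1.0, -2.0, 3.0]
V = functional_from(v)

u, w = [2.0, 0.0, 1.0], [0.5, 4.0, -1.0]
lhs = V([x + y for x, y in zip(u, w)])   # V(u + w)
rhs = V(u) + V(w)                        # equals V(u) + V(w), by linearity

# The representer v is recovered uniquely by evaluating V on the basis vectors.
basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
recovered = [V(e) for e in basis]
```

Linearity holds, and applying V to the basis recovers v coordinate by coordinate, which is the uniqueness claim in miniature.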
Finitude » srb
Question: Why do we only notice finities, when infinities are supposedly just as
real?
Answer: What we perceive differs from what we understand, because
understanding comes with vitality and the ability to lie. Still, life does not have such
additions, and we take the bare nature as pure "truth" and pure "perception".
However, we cannot all have the same vitality or what comes with it. Also, for one
and the same subject, there is no equal perception of all kinds of objects, and
infinities and abstractions in general are not fronts of such lists. In other words, our
experiences of infinity are as real as
similar mathematical truths, except that
they are harder to perceive.
The immediate perceptions of dead
physical matter therefore always come
in finite portions, although, with a little
vitality, the dead matter might realize
its infinite nature. This is the same
result presented here before, but notice
that it is now performed in a different
way (Packages). Illustrated graphically, the dead matter will not cross over to the
other side of the stream of lies, unlike vital subjects. The recognition of dead matter
thus limits its truthfulness to some parts of reality, although others also exist.
The ways of perception hidden behind p-norms and the representation of functionals also contribute to this variety of reality. Only the usual Euclidean space, be it physical space or one of its mathematical forms, which is of 2-norm, will be perceived by a unique subject, also of 2-norm. Otherwise, a space of p-norm will have as its unique observer a subject of q-norm, where 1/p + 1/q = 1, and the intensity of that observation is ∥x∥q ≤ ∥x∥p if p ≤ q. See also the proof of this inequality (Proposition 20).
For example, a probability space of norm p = 1 has a dual subject of unique perception from a space of the q = ∞ norm. Translated, this means that an extremely large uncertainty can only be uniquely perceived by an equally large certainty. Just a little less uncertainty, p slightly above 1, will need a correspondingly large certainty, that is, q = p/(p − 1). It seems that we must come precisely from the world of large numbers (large certainty) to be able to perceive the world of quantum mechanics (large uncertainty).
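The relation q = p/(p − 1) from this paragraph is easy to tabulate; here is a small Python sketch (the sample values of p are arbitrary illustrative choices):

```python
# Hedged sketch: the conjugate (dual) exponent q = p/(p - 1),
# so that 1/p + 1/q = 1. As p approaches 1, q blows up to infinity.
def dual_exponent(p: float) -> float:
    if p == 1.0:
        return float("inf")
    return p / (p - 1.0)

for p in (1.0, 1.01, 1.1, 2.0, 3.0):
    print(p, dual_exponent(p))  # p = 2 is self-dual: q = 2
```

The printout shows the point of the paragraph: the closer p sits above 1 (great uncertainty), the larger the dual exponent q (great certainty).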
On the other hand, in a world of great uncertainty (small p), precisely because of
great randomness, greater interactions or communications within or around great
certainties (dual q) are possible. A subject who adheres to the rules and is
surrounded by a multitude of rules is deprived of many things.
Distances III » srb
Question: Can you clarify that "distance" by the "p-norm"?
Answer: In the picture on the left is the
first quadrant of the "circle":
x^p + y^p = 1
in the spaces Lp of integrals, together
with string space ℓp, with several
exponent values:
p ∈ {1/3, 1/2, 1, 2, 3, 4, 5}.
Only p = 2 gives the well-known
Euclidean circle of circumference 2π⋅1.
The circumference of all others is
greater than 2π.
However, by taking the p-th roots of those arcs, the left-hand side of the above "circle" equation decreases, because
... < (x³ + y³)^(1/3) < √(x² + y²) < x + y < (√x + √y)² < ...,
and the right side remains the same (the root of the unit). Therefore, the coordinates (x and y) must increase to maintain the sum, thereby increasing the detour outside the Euclidean arc. That the p-norm is a decreasing function of p, for p ∈ (0, ∞), is proved in my contribution (Inequalities, Proposition 20). But, because of the objection that that proof is incomprehensible, here is another one.
We consider n-tuples of real or complex numbers x = (ξ1, ξ2, ..., ξn), and we view them as the coordinates of a point, or a vector x, whose intensity is the p-norm:
∥x∥p = (|ξ1|^p + |ξ2|^p + ... + |ξn|^p)^(1/p).
1. Proposition. For an (extended) real number 1 ≤ p ≤ ∞ and the array x = (ξ1, ξ2, ..., ξn) whose coefficients are numbers, the mapping
p → ∥x∥p = (∑k=1,...,n |ξk|^p)^(1/p)
is monotonically decreasing. In other words, from p > r ≥ 1 follows ∥x∥p ≤ ∥x∥r.
Proof: We use the simple norm homogeneity principle. Without loss of generality, we assume that ∥x∥r = 1, and hence |ξk| ≤ 1. Then |ξk|^p ≤ |ξk|^r for all k = 1, 2, ..., n. Hence:
∑k=1,...,n |ξk|^p ≤ ∑k=1,...,n |ξk|^r = 1,
∥x∥p = (∑k=1,...,n |ξk|^p)^(1/p) ≤ 1 = ∥x∥r,
and that is the desired result. ∎
If you take my word for it, you don't even need to read the proof: with larger values
of the positive exponent p, the value of the p-norm is smaller. And that is that. If
you want a little more detailed proof, there is one by derivation (P-Norm) or others
that you can search for yourself.
Using the p-norm, we define the metric by a distance function:
dp(x, y) = ∥x − y∥p = (∑k=1,...,n |ξk − ηk|^p)^(1/p),
where y = (η1, η2, ..., ηn) is also an n-tuple of real or complex numbers. From the previous text, dp(x, y) ≤ dr(x, y) whenever p > r ≥ 1. All this is generally transferred from finite sequences to infinite sequences (the space ℓp) and to integrable functions (the space Lp in the narrower sense).
Considering the contraction of lengths and the shortening of dp(x, y) with the growth of the exponent p, what seemed farther gets closer. To travel the same previous intensity of the path from the point x to the point y, the journey becomes longer, as far as the relative observer is concerned, for whom these lengths remain the same. As in the theory of relativity, here we also distinguish between an intrinsic (proper) and a relative observer. The relative one perceives the one with greater p as of reduced "intensity" and, shall we say, less informative (Finitude). An analogy with a distant object that therefore becomes smaller and less detailed helps us.
For information theory, I use various other interpretations of metrics (Distances II), in addition to p-norms, for easier representation of certain phenomena. However, I do not expect that they could be mutually contradictory. And if they were, that would be a serious weakness of the theory itself.
Distribution » srb
Question: The space of great uncertainty, p = 1, has its dual subject of unique
perception from the space of great certainty, norm q = ∞?
Answer: That's right; this is the basic content of the theorem with p-norms (4.7 Proposition), although that theorem does not deal with the interpretation raised in the question.
We base our interpretations on the
interpretation of elements of metric
spaces as physical states and norms as
the sum of "intensities" of potential and
actual physical phenomena. The Skolem
theorem gives us the right to say that
the deductive theory w, which has an
application to some narrower practice
W, also has wider applications to the
practice V ⊃ W with its extension v, such that v = w everywhere on W ⊂ V. These
exact theories are inevitable parts of unique concrete phenomena, but they are
constituents of bare physical matter that occur in countless other similar,
unrepeatable states.
Forms often repeated in various
phenomena, or rather, what is repeated
in forms, are theoretical, abstract
things. In contrast to bare concrete
phenomena that are unique.
However, the subject of theories is also
numerous unrepeatable phenomena, as
evidenced by the paragraph mentioned
above (4.7. Proposition).
The same position says that the bounded linear functional f(x), that is, the function that maps the functions x(t) of the 1-norm space L1, with the numbers t from the interval I = (a, b), into numbers, has the representation f(x) = ∫I y(t) x(t) dt, where the dual function y(t), of the norm q = ∞, is unambiguously determined. The norms p = 1 and this q are dual; for them, the equality 1/p + 1/q = 1 holds.
This answer also belongs to the previous question (Finitude), the part where we deduce the decrease of the norm ∥x∥p with the increasing exponent p of the same point x, which is determined by 1. Proposition of the previous answer applied to integrals. The interpretation of that and this position is the existence of a unique observer in conjunction with the information of perception, the state x paired with y, where the first comes from the p = 1 norm and the second from the q = ∞ norm, as uncertainties seen from positions of certainty.
Although the points of the space ℓ1 can be probabilities (in L1, probability densities), they need not be probability distributions. Then, say, the stochastic matrix A (Markov chain) maps the probability vector x to the probability vector y in the manner y = Ax:
⎛ η1 ⎞   ⎛ a11  a12  a13 ⎞ ⎛ ξ1 ⎞
⎜ η2 ⎟ = ⎜ a21  a22  a23 ⎟ ⎜ ξ2 ⎟
⎝ η3 ⎠   ⎝ a31  a32  a33 ⎠ ⎝ ξ3 ⎠
where the columns of the matrix A are non-negative numbers, each summing to one, while the elements of the vectors x and y, also non-negative, can have an arbitrary but mutually equal sum, their 1-norm. We easily prove the latter:
∥y∥ = ∑i=1,...,3 ηi = ∑i,j=1,...,3 aij ξj = ∑j=1,...,3 (∑i=1,...,3 aij) ξj = ∑j=1,...,3 ξj = ∥x∥.
Various interpretations of this equality are possible, and one of them is the law of
conservation (total probability), although the matrix A is not isometry in the sense
that its determinant is not unity.
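A quick numeric check of this conservation, sketched in Python with an illustrative column-stochastic matrix (the entries are made up; only their column sums matter):

```python
import numpy as np

# Sketch: a column-stochastic matrix A (non-negative columns summing
# to 1) preserves the 1-norm of a non-negative vector: ||Ax||_1 = ||x||_1.
A = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.7, 0.1],
              [0.2, 0.1, 0.6]])  # illustrative Markov-chain matrix
assert np.allclose(A.sum(axis=0), 1.0)  # each column sums to one

x = np.array([0.2, 0.5, 0.3])  # a probability vector
y = A @ x
print(y, y.sum())  # the components change, the total sum stays 1
```

Note that A is indeed not an isometry of Euclidean length; only the 1-norm (total probability) is conserved.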
Views on representations of functionals determine optimality when dual systems are observed: here, exponents for which 1/p + 1/q = 1 holds, or dual spaces of functionals X* and the vectors from X that these map into numbers, or co- and contravariant tensors (Variance). The mentioned position determines the optimum of observing one from the point of view of the other in all such cases, although their connections are not visible at first glance.
In particular, this is about the optimality of communicating greater uncertainty
with exemplary greater certainty. It takes a lot of vitality to grasp the quantum
world of uncertainty. The reverse is also true: uncertainty processes will be more
pervasive in rigid systems than in defined, well-ordered, and "obedient" systems.
Conjugated » srb
Question: You claim that the subject achieves its uniqueness thanks to the
communication in which it perceives objects as a dual world and that therefore
what we see is not really what it is?
Answer: It may sound pretentious that functional analysis interferes with, let's say,
Plato's world of ideas, but the statement is much more accurate than it seems at
first glance. In this sketch, an object of height H is at a distance p from a lens of
focal length f, whose light is projected to the other side into an image of height h at
a distance q from the lens. It is a conjugate focal plane which is accompanied by the
well-known equation in optics:
1/p + 1/q = 1/f.
Camera manufacturers can change little in that equation, apart from choosing
millimeters or centimeters to measure the focal length, but in mathematics, we can
adjust the units of length to be f = 1. Such treatment of optics becomes an obvious
application of functional analysis.
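With the units adjusted so that f = 1, the lens equation takes exactly the form 1/p + 1/q = 1 of conjugate exponents. A tiny Python sketch (the sample distances are illustrative choices):

```python
# Sketch: solving the lens equation 1/p + 1/q = 1/f for the image
# distance q. With f = 1 this is exactly the conjugate-exponent
# relation 1/p + 1/q = 1 of the dual norms.
def image_distance(p: float, f: float = 1.0) -> float:
    return 1.0 / (1.0 / f - 1.0 / p)

print(image_distance(2.0))   # p = 2 is self-conjugate: q = 2
print(image_distance(1.25))  # p close to f pushes the image far out
```

As the object approaches the focal distance (p → 1 in these units), the image distance q grows without bound, mirroring the p = 1, q = ∞ duality of the norms.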
In the fields of such mathematics, a dual space is the space of continuous linear
functionals on real (ℝ) or complex (ℂ) Banach space. The dual space of a Banach
space (labeled X) is again a Banach space (usually labeled X*), when endowed with
an operator norm. The dual X* of the Banach space X is also called its adjoint, or
conjugate.
In algebra, adjoint treatment is consistent with mathematical analysis and specific to another topic. In optics, in the sketch above, the conjugate plane X* is a copy of the plane X of the original: the object is mapped into its image. The greater the distance p, or more precisely, the greater the quotient p/q, the greater the ratio of heights, H/h, of the object and its image. This recalls the position (Distances III) about the decline of the norm ∥x∥p with increasing exponent p.
When the image is in focus, with the appropriate q, its blur is the least and the image resolution is the best possible. In this optimal case, we have a unique (obscurity can hide different objects) and detailed representation of the object, as far as photography can achieve. The reverse is also true: the flow of light from the copy to the original would follow the same paths backward, symmetrically, like the mappings of the spaces ℓp and ℓq, or the analogous functional analysis.
On the other hand, we also have an independent addition to this in the immediate
cognition of physical reality (Finitude) given that what we perceive cannot be what
we understand, because there is no mathematical knowledge without vitality and
the ability to lie. The need for false assumptions to prove the truth is not a
possibility that still life possesses, and therefore the truth we would find about it is
not exactly what it might have about itself.
Provenance » srb
Question: Does conjugation of complex numbers have anything to do with this,
with adjoint and dual norms?
Answer: Yes, those concepts are a series
of generalizations; let's reduce them to
chains of meanings. At the bottom of
the arrays are multiplications of real
numbers, or squaring. Among such,
let's add the conjugated complex:
|z|² = z z* = (x + iy)(x − iy) = x² − ixy + iyx − (iy)² = x² + y²,
multiplication of numbers in the picture
on the right, z = x + iy and z* = x - iy, where the imaginary unit is i² = -1. The point
z is reached with x units of the real axis, steps along the abscissa, and then with y
imaginary steps (ordinate). The conjugate point z* is the reflection of z over the
abscissa.
The picture shows the plane of complex numbers ℂ, and we have it in mind for each coefficient of the vector, the n-tuple of numbers x = (ξ1, ξ2, ..., ξn). This x and that one do not have the same meaning, which is not a problem if you follow the sense. The meaning of the previous z is now taken by each of the components ξk ∈ ℂ, in order of the index k = 1, 2, ..., n. When we look at the vector x together with the vector y = (η1, η2, ..., ηn), although both belong to some n-dim coordinate system, reduced to a common origin they span only a 2-dim space. Arrays are a type of vector.
Reduced to one plane, we multiply these vectors in the Hermitian way:
⟨x, y⟩ = x⋅y* = ξ1η1* + ξ2η2* + ... + ξnηn*,
where the first notation, the Dirac bra-ket parentheses, is characteristic of the scalar product in quantum physics, and the other, with a dot between vectors, is more common in mathematics. I use both equally because I see their meaning in the information of perception. We then generalize this to the quantum adjoint, which is both conjugation and transposition (exchange of rows and columns). For example, for x = (3, 1 − 2i, −4i) and y = (2i, −3 + i, 4), it will be x† = (x*)⊤ and y† = (y*)⊤, that is:
that is:
x† = (3, 1 + 2i, 4i)⊤,  y† = (−2i, −3 − i, 4)⊤.
The basis vectors and these Hermitian adjoints look and behave as co- and contravariance, and then as complex numbers. Namely, by matrix multiplication, we will get x x† = 30 and y y† = 30. As matrix multiplication is not commutative, reversing the order of multiplication will give different results.
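These products can be verified mechanically; here is a short NumPy check of the vectors from the text. Note that each product x x† equals the sum of the squared moduli of the components, which gives 30 for both vectors.

```python
import numpy as np

# Check of the adjoint products: x x† is the sum of squared moduli
# of the components, a real non-negative number.
x = np.array([3, 1 - 2j, -4j])
y = np.array([2j, -3 + 1j, 4])

xx = x @ x.conj()  # row times adjoint column: 9 + 5 + 16 = 30
yy = y @ y.conj()  # 4 + 10 + 16 = 30
print(xx.real, yy.real)  # 30.0 30.0
```

Reversing the order, x† x, gives a 3×3 matrix instead of a scalar, which is the non-commutativity mentioned above.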
Communication at the quantum level is more random than in the macro-world,
where certainties increase consistently with the laws of large numbers in
probability theory. In a world of our size, we normally do not encounter the
"impossible" occurrences of small probabilities that would be common in that
world of tinyness. We do not come across "bypasses", nor do we easily understand
some deeper meaning of complex numbers.
However, all common forms of vector spaces with vectors like x or y also have linear mappings (A) that can mutually copy such ordinary vectors (y = Ax), and others besides; they can also map similar mappings. The results of the sums of products ⟨x, y⟩ are more visible to us and more significant when they are real, and that is why quantum mechanics overestimates Hermitian operators, I believe. Only real eigenvalues λ from the eigenequation Ax = λx are observable.
Adjoining linear operators is analogous to vectors, so in the matrix representation,
we have:
⎛ 1     2i      ⎞†    ⎛ 1     −4i     ⎞
⎝ 4i    −1 + 3i ⎠  =  ⎝ −2i   −1 − 3i ⎠ .
These operators are Hermitian only when A† = A, which is clearly not the case here.
However, the product operator B = AA† is Hermitian.
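A small NumPy check of this paragraph, using the matrix A from the text:

```python
import numpy as np

# The operator from the text: A is not Hermitian, but B = A A† is.
A = np.array([[1, 2j],
              [4j, -1 + 3j]])
A_dag = A.conj().T  # adjoint: conjugate transpose

B = A @ A_dag
print(np.allclose(A, A_dag))        # False: A is not Hermitian
print(np.allclose(B, B.conj().T))   # True: B = A A† is Hermitian
print(np.linalg.eigvalsh(B))        # real (non-negative) eigenvalues
```

The eigenvalues of B come out real, as they must for a Hermitian operator, even though the matrix entries are complex.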
Hermitian operators have real eigenvalues, although their matrix representations
(otherwise different in different bases) are often with complex coefficients.
Quantum physics sticks exclusively to dual exponent spaces p = q = 2, but I'm sure
it will get richer over time.
Reflection » srb
Question: What does functional analysis say about the copy of the copy?
Answer: When X and Y are Banach spaces, i.e., complete normed vector spaces, then the set of all bounded linear mappings from X to Y is again a Banach space. If Y is the set of scalars (real ℝ or complex ℂ numbers), then we denote it by X*.
The set of linear mappings from the vector space X to the scalars Φ (ℝ or ℂ) is the set of bounded linear functionals, the space labeled X*, which we call the dual, adjoint, or conjugate space of X. Many functionals have an interpretation in Information of Perception, and that is why these areas of mathematics, algebra, and analysis are particularly interesting to us. Namely, a vector is a particular "state" of a given system (vector space), a mapping is a "process" over such states, and "information" is their measure.
The measure of information is also the norm of a vector. Although these
"measures" are not conceptually identical, formal analysis is facilitated by
observing the mapping between them and in the eventual bijection (mutually
unique mapping) when we say that these spaces are algebraically "isomorphic" (a
structure-preserving mapping that can be reversed by inverse) and "isometric" (a
distance-preserving transformation). When such spaces have the same size and
shape, we say that they are "congruent".
The mapping of the mappings from X* to scalars (to real or complex numbers) is again some Banach space, denoted X**. We also call it the "second conjugate" space of X. It is possible to prove that the Banach space X is congruent to some subspace of X**, which can be written X ⊂ X**. When X** = X, we say that the space X is reflexive, and if X** ≠ X, the space X is irreflexive.
We know that every finite-dimensional normed linear (vector) space is reflexive. Namely, for the dimension n of such a space, the following applies:
dim X = dim X* = dim X** = n,
and the mapping π : X → X** is a bijection, so π(X) = X**. In other words, X is a reflexive space. Hence ℝn and ℂn are reflexive spaces. This results in the reflexivity of the "space" of the information of perception. It is also the formal equality of the event space (X) with the space (X**) of all bounded linear mappings of the functionals (X*).
The equality of the state of nature (X) with the information images of perception
(X**) is a type of congruence if the former is finitely dimensional. Since it excludes
infinities, this theory remains valid as long as the second conjugate remains
reducible to a finite-dimensional space. We know that it is impossible to know the
"essence" of nature (if such exists), and with this, we know that we have its
equivalent.
Sine Wave » srb
Question: How do the extra dimensions fit into all of this?
Answer: As anticipated, mathematics
does not struggle with "excess"
dimensions, but the requirement of
"objective randomness" implies
additional dimensions of time.
In the picture on the right is a link to a
video of 3-dim sinusoids. In the
complex plane, the point z = x + iy
moves on a circle, and in the same
plane, x = Re(z) and y = Im(z), slides along the time axis t. The red dot z describes a
spiral wrapped around the t-axis and projected onto the planes xt, yt, and xy, which
becomes respectively a sinusoid x = sin(ωt), a cosine y = -cos(ωt), and the
trigonometric circle x² + y² = 1.
The equation of such a point in the plane ℂ can also be written like this:
z = eiωt = cos(ωt) + i⋅sin(ωt),
where imaginary unit means i² = -1. In one way or another, those projections in the
two planes mentioned above remind us of the vertical planes of the electromagnetic
wave (light). Let's take this light "actually" as moving through 3-dimensional space
in time like the upper sinusoid. At the same time, we only see its projections on two
orthogonal planes as its electric and magnetic phases, which mutually induce each
other. There will be no obstacles to the mathematics of that representation.
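A minimal NumPy sketch of these projections (ω = 1 is an illustrative choice, and the phase convention follows z = e^(iωt)):

```python
import numpy as np

# The spiral z = exp(i*omega*t): its projection onto the xy-plane
# stays on the unit circle, while the projections onto the xt- and
# yt-planes are two phase-shifted sinusoids.
omega = 1.0
t = np.linspace(0.0, 4 * np.pi, 1000)
z = np.exp(1j * omega * t)

x, y = z.real, z.imag  # x = cos(omega*t), y = sin(omega*t)
print(np.allclose(x**2 + y**2, 1.0))       # True: trigonometric circle
print(np.allclose(x, np.cos(omega * t)))   # True: one projection
```

Whether the two sinusoids come out as sine and negative cosine, as in the video, or as cosine and sine, as here, is only a matter of the starting phase of z.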
In the case of light, which walks through excess dimensions and projects its electric
and magnetic phase to us, we would have "bypass" and an additional law of
conservation through these excess dimensions, which would, in turn, be physically
realistic and experimentally measurable. With mathematics, it is not possible to
dispute the existence and deeper, then pseudo-reality of phenomena similar to the
above complex sinusoid. Pseudo-reality, according to this theory, goes with the
absence of conservation laws (Outsider).
Note that the "law of conservation" is relative. The process that flows in another
time dimension is temporally incomparable to ours—the direction or intensity of
the past and future—so it makes no sense to talk about some universal amount of
change and its maintenance. However, if pseudo-reality is missing, for example,
due to "circumvention," then by applying 3.3. Proposition: If Y, Z ⊂ X
complementary spaces, then to each x ∈ X uniquely matches some y ∈ Y and z ∈ Z
so that x = y + z, those external realities become ours. Moreover, their singularities
thus grow into extended commonalities.
Time gains importance, which should also be natural to information theory, so the
answer to the following question I received recently is: How should I briefly describe "swindlers"? These are the ones who are not able to do what they promise, if you mean the II League players of this classification (Traits). First-division players predict well, and third-division players are situational (they would only combine good with good). The ability to use time belongs to vitality, and both complement abstract analysis.
Actual vs Ideal » srb
Question: What are the comparative characteristics of concrete and ideal?
Answer: In theory, theory and practice
are the same; in practice, they are not,
said Einstein. A gram of practice is
worth more than a ton of theory, as
many others have said. Nothing is more
practical than a good theory, which
third parties would have us believe. And
here is something about it from the
point of view of my information theory.
Both theory and practice are matters of our perception. However, physics cannot
prove, say, the Pythagorean theorem, nor can it dispute it. If, by counting in the
meadow, we find that "two cows plus two cows equal four cows," there are too
many objects in the universe to confirm experimentally that "2 + 2 = 4." So, we
agree that there are differences in subjects and ways of studying physics and
mathematics.
On the other side of our efforts to find an accurate statement of mathematics lies
the honesty of physical phenomena and their inability to lie, that is, to make
inaccuracies. The ability to lie requires us to know an accurate statement (Not
really), which is not available to concrete things themselves. Hence the idea of an
"excess" amount of options (information), which I call "vitality," that a living being
has over the physical substance of which it is composed. That idea is not the ideal
in the given question; it is a polluted ideal, but one that helps us notice the
difference between true and false fiction.
Let us then notice that the truth is persistent and seems somewhat unattractive,
unlike lies. Be it a concrete or an abstract phenomenon, each of these truths is
permanent in its own way. For the first one, we can claim that it follows the "law of
conservation" known to us from physics, but we can analogically transfer the same
to the second one. The first one is constantly changing, while the conservation law speaks of the total amount in a closed system, say, the energy that changes forms; this defines abstractions of truths that last, are timeless, and are omnipresent.
Because of the variability of forms captured by the law of conservation of quantity,
for example, physical action is quantized. It occurs in the smallest packages and
communicates using their equivalent physical information. Because our macro-time runs in equal intervals and the action is the product ΔE⋅Δt of the changed energy and the elapsed time, the law of conservation of energy applies together with the corresponding information.
The law of conservation of the total amount of variable forms is the cause of their finite divisibility, because an infinite quantity can be equal to its own proper part. Thus (Sets), there are as many even {2, 4, 6, ...} and odd {1, 3, 5, ...} numbers (ℵ0) as all natural numbers ℕ = {1, 2, 3, 4, ...}; however, "three cows are more than two cows". Formally, zero is also a number, so we can formally consider certainty as zero information. Such information is "news" that we know will happen, and when it happens, it is not news.
Zero information can be present in unlimited quantities in everything concrete, even in every quantum of action. And, as the Skolem theorem says (Löwenheim-Skolem), every theory of cardinality ℵ0 also has a model of higher cardinality (for example, the continuum), and every theory of higher cardinality also has a countably infinite (ℵ0) model, so such occurrences are literally ubiquitous. How their uniqueness comes about is specified in the previously mentioned 3.3. Proposition. There are copies of parts everywhere, and combining them results in less repeatability.
The ability to theorize about things will reveal the presence of theories in them, and
the uncertainty that hides them will also indicate their information. These would be
some basic comparative characteristics of the concrete and the ideal, with
important observations that, for our understanding, the former are inseparable
from the others, and conversely, that the latter, due to the necessary presence of an
inconstant lie, are inseparable from the former.
Finally, here is another lesser-known contribution to these characteristics. A lie is
not physically real and, therefore, spontaneously disappears. While the local
quantities of falsehoods are constantly disappearing, their total potential persists
because we can replace the truths with falsehoods, ⊤ → ⊥ and at the same time ⊥
→ ⊤, in all formulas of the algebra of logic. Hence the equivalence of the "world of
truth" with the potential of the "world of lies," as well as the equivalence of
tautologies and contradictions.
Deception II » srb
Question: Can the spontaneous disappearance of lies replace minimalism?
Answer: I've tested that idea several times (Deception) and found things
(Memories), but not what I would expect. Fiction is a large class of diverse and
interesting phenomena, with lies perhaps a large but not their only type. It is always possible to write about them in a new way.
Reduction to contradiction as a method
of proof in mathematics (Numbers) can
also be seen as a process of cleansing
oneself of untruths. What remains in
the end is a mere formal truth and the
fortunate circumstance that it exists at
all. The impossibility of arriving at truth
without "cleansing" is a message that still life, or anything woven from pure truths,
is not in a position to perceive itself in our way. Our image of physical nature and the original are not the same, and I have discussed this in other ways (Conjugated).
Lies and other fictions attach to the physical body and persist; in return, that body
is given a structure with vitality and mortality. As they last, exchange effects, and
reproduce, the resulting living beings expand into three histories: personal, family,
and cultural. It's unusual to write about forms like this, but let's remember that
"surviving" means having an increasingly long past at the expense of a thinner
present, which can be changed with principled "minimalism". Thus, the law of
conservation of physics covers the principle of least action, and this is further interpreted by spontaneous development into less informative states, i.e., more frequent occurrence of more likely outcomes. A lie almost achieved that.
Inanimate physical matter is structured in such a way that it hardly accepts forms that we could call "incorrect expressions." With that in mind, we simply say, "A lie is repugnant to the truth." Otherwise, life would be more easily created by the non-living, and violations of the principle of least action of physics would be more frequent. We see the same with vital beings. For example, criminologists know that the brain works harder when making up lies than when following the truth. On the other hand, a lie, as a diluted truth, is more attractive to passive listeners. So much for this complex topic, which we leave now until a better opportunity.
Removing assumptions and replacing them with opposite statements, if they lead to contradictions, gives pure truths. However, this method is not easy. For example, the Russian mathematician Lobachevsky (1793–1856) worked persistently with the negation of Euclid's fifth postulate of parallels, unsuccessfully searching for a contradiction. He obtained a geometry in which the sum of the angles of a triangle is less than 180°, in which the ratio of the circumference to the diameter of a circle is greater than π, and in which through a point outside a given line at least two parallel lines can be drawn. He worked for many years to establish that there is no contradiction, that the fifth postulate cannot be proven from the other four, and that he had a new geometry just as consistent as Euclid's. He was lucky enough to find evidence of the latter.
In this theory, the past is a kind of augmented reality. Such are the other dimensions of time, or the truth itself, but not all of their aspects are available to us. Such access is forbidden by the principled objectivity of uncertainty, and hence it is not convenient to tie the starting point of the theory closely to the provability of lies. Otherwise, I would do that too. In this, I emphasize that it does not say that the negation of a lie is not true, but only that it is not the first method of choice.
If every correct theory is non-contradictory, whether it is provable to us or not,
then we can say that the impermanence of lies has the principle weight of the law of
conservation. However, not every statement is provable, not all equations are
solvable, and not every future is predictable. This is the reason that the
"spontaneous disappearance of lies" leaves aside the principle of minimalism.
Dissonance » srb
Question: I am also interested in the mentioned "lie as a diluted truth" as a "until a
better opportunity" theme. Can you say something about that now?
Answer: We know we prefer the familiar (Unstable). We generally do not
understand that this is a consequence of the general tendency towards greater
probability, less information, and less
action, so we only see laziness, one of
the links in that chain of causes.
Cognitive dissonance (see image link)
consists of inconsistent thoughts and
beliefs, especially in decisions about
behavior and changing attitudes. It is
the mental discomfort created by
holding two opposing beliefs, values, or
attitudes. Striving for consistency is the
cause of this conflict and the resulting
unpleasant feelings, while all striving
for consistency comes from the principle of minimalism.
Given that physical information is equivalent to action, it does not exist outside of quanta, and these are manifested through interactions and, in the end, always by movement; so we are always talking about the transfer of physical information through space-time. Due to the relativity of movement, the aforementioned "striving for less" is an effort toward smaller changes, i.e., less movement. This brings the inertia of the body, and with it the lawfulness of action and reaction, to the fore. The level of given values is downplayed against efforts to change it.
Formally, let's break down that "inertia" into two components: the principled
desire for less information and the law of conservation. So let's explain in two ways
that "a lie is repugnant to the truth" (Deception II), so dead nature doesn't want it,
doesn't understand it, and doesn't follow it, and on the other hand, that "the brain
likes the familiar." We and other living beings are trapped by our physical substance and vitality, each in our own zone of conformity: below it, the values of the "amount of options" (information) are lower than those at our disposal, and there we are not free; above it, an excess of options and excess freedom scares us.
Just like the lower limit of conformity mentioned here, below which we desire
freedom, psychology also knows the upper limit. Fear of the unknown refers to
anxiety about unpredictable situations or events. It is associated with things that
we consider unfamiliar or strange. Fear of the unknown can also occur when we
lack information. Another name for this condition is intolerance of uncertainty.
More subtle changes are also present. Structures tend towards greater efficiency by
evolving into less informative (Extremes), because the distribution of equally likely
outcomes is more informative than one of unequal chances. Hence, the network of peer links is grouped into a few nodes with many connections and many nodes with fewer connections, and so we get the well-known "six degrees of separation". This is how synergy and its related emergence arise, the otherwise strange and inexplicable phenomena of complex structures that their parts do not have, appearing only through interaction in a larger whole.
Even more abstract is the following observation: from the same Extremes (1.6) comes the position that, with the given expectation μ = 1/λ, the most informative is the exponential probability density distribution g(x) = λe^(−λx), for the variable x ≥ 0. Such a distribution is undesirable, unstable, and absent in physical reality. But where it is absent among things of truth, we find it in fictions.
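The comparison behind this claim can be made concrete with closed-form differential entropies; a small Python sketch comparing the exponential density with a uniform density of the same mean (the value of μ is an illustrative choice):

```python
import math

# Sketch: among densities on [0, inf) with mean mu, the exponential
# g(x) = (1/mu)*exp(-x/mu) has the largest differential entropy,
# 1 + ln(mu). Compare the uniform density on [0, 2*mu], which has
# the same mean but entropy ln(2*mu).
mu = 1.5
h_exponential = 1.0 + math.log(mu)
h_uniform = math.log(2.0 * mu)

print(h_exponential > h_uniform)  # True: 1 > ln 2
```

The gap is always 1 − ln 2 ≈ 0.307 nats, independent of μ: the exponential distribution is strictly more informative than any uniform one of equal expectation.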
That's why the lie can be in exponential decline, most important at the beginning of
the broadcast and quickly becoming irrelevant over time. It is a great "bomb" for
agitations when we throw it out like "feathers from a pillow" that are harder and
harder to collect later. A lie spilled at a given moment will create a media effect that
then remains even after it is denied. The rapidly disappearing importance of lies
has long been used not only in politics. Precisely because it is less dense, a lie does
not behave like the truth or like physical reality.
Copyright © 2002 - 2024 Rastko Vukovic