SNERX.COM/MATH Last Updated 2023/11/4 • Read Time 22min • Discord ______________________________________________________________________________________ I never learned math in school, so as an adult I have been trying to teach it to myself. The things I have found on this page are probably wrong, and the things I am not wrong about were likely discovered long ago. This is just for fun. Infinite Series & Randomness: If you have an infinite series of whole numbers from one to positive infinity, and you randomly select a number from that series, then the length (number of digits) of that selected number will also be infinite. That is, the average length of the series 1-9 is 1, since each number in the series is 1 digit long, but the average length of the series 1-99 is closer to 2, and the average length
of the series 1-999 is closer to 3, and so on, trending towards infinity. So the average number of digits for numbers in a series of integers from one to infinity is infinite. An average means that roughly half the numbers in the series are longer than the other half, and roughly half are shorter. But half of infinity is still infinity, and half of that is infinite still, thus all the numbers available to pick must be infinite in length. Of course, no natural number can be infinite in length, so we have a problem. Conversely, there is no actual randomness in infinity. If you are to randomly pick a number from one to ten, each number has a one-out-of-ten chance of being picked, but if you are to randomly pick a number from one to infinity, each number has infinitely low odds of being picked. I posit that since these odds asymptotically approach zero, the odds of randomly picking any number out of an infinite series are actually zero, meaning you simply could not pick a number. As further explanation, James Torre pointed me to the impossibility of a uniform probability measure on the Naturals for why this random selection fails.
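The growth of the average digit length can be checked numerically for finite cutoffs; a minimal sketch (the function name avg_digit_length is mine, not from the text):

```python
# Average number of digits of the integers 1..n, for growing n.
# The mean tracks log10(n), so it grows without bound as n does,
# which is the finite version of the claim above.
def avg_digit_length(n):
    return sum(len(str(k)) for k in range(1, n + 1)) / n

for n in (9, 99, 999, 9999):
    print(n, avg_digit_length(n))
```

Each extra factor of ten adds roughly one more digit to the average, which is why the 1-9, 1-99, and 1-999 examples above give averages near 1, 2, and 3.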
Goldbach's Conjecture: If not already familiar, read this. All even numbers are predicated off of 2, and 2 is a prime, so all reduction of numbers predicated by 2 can be reduced to primes of the predication value, which in this case is 2 (so 2 primes). The generalization then is such that any member of a factor-tree based on some number X will inherit properties of X and will be reducible to an X-unit-count that share at
least one of those given properties. The way this looks for the existing conjecture: Every number factorable by 2, past 2, can be expressed by 2 units of some property of 2 (namely primehood). The way this looks for new conjectures: 1 — Every number factorable by 1 (all whole numbers), past 1, can be expressed by 1 unit of some property of 1 (namely empty product or unity/identity). 3 — Every number factorable by 3, past 3, can be expressed by 3 units of some property of 3 (namely Mersenne primes or Fermat primes); effectively the weak conjecture. π — Every number factorable by π, past π, can be expressed by π units of some property of π (namely irrational numbers or transcendental numbers). 2 — Every even integer greater than 2 can be expressed as the DIFFERENCE of 2 primes. The last example is an augmented version of Goldbach's Conjecture showing that unit relations are arbitrary so long as the relations are of non-arbitrary units themselves. The point of this is to show that there is a generalized form of the conjecture that lets us posit any number of equally plausible relations/patterns, and if Goldbach is right about his conjecture, then so too must all the other conjectures of the same form be correct.
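Both the sum form and the DIFFERENCE form can be spot-checked by brute force; a finite sketch (helper names are mine) that verifies small cases only and of course proves nothing:

```python
# Spot-check Goldbach's conjecture (every even n > 2 is a SUM of two
# primes) and the augmented version above (every even n > 2 is a
# DIFFERENCE of two primes) for small even numbers.
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def sum_of_two_primes(n):
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def difference_of_two_primes(n, search=10_000):
    # Look for a prime p with p + n also prime, up to a finite bound.
    return any(is_prime(p) and is_prime(p + n) for p in range(2, search))

for n in range(4, 100, 2):
    assert sum_of_two_primes(n)
    assert difference_of_two_primes(n)
print("both forms hold for even numbers 4..98")
```

The difference form searched here is related to (unproven) conjectures like Polignac's, so the finite search bound is doing real work; a failure would only mean no witness was found below it.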
Serious Problems for Cantor's Diagonalization: The conclusions drawn here are that Z and R are not of separate cardinalities. There are three issues I've found with diagonalization while trying to do the metalogic for it. The first occurs when swapping the lists; if you put the natural numbers in a list indexed by the real numbers instead of the other way around (the way Cantor does it), you attain the same
outcome of new infinities, meaning the conclusion would be that the set of natural numbers is larger (and uncountable) than the set of real numbers (which are then the countable set), a contradiction since this set contains the natural set. This is of course the opposite of what Cantor concluded. To demonstrate this, look at the following images modified from Veritasium's video Math Has a Fatal Flaw: Above, you see all I have done is swapped the natural index numbers on the left with the list of real numbers on the right from 0.0 to 1.0. The randomized list of natural numbers is enumerated just the same in the right list as the real numbers are in the left list. All we do now is apply the diagonalization technique Cantor uses on the new list the same as the old list, shown below. What you see here is that the new natural number we generate from the diagonalization method similarly 'does not appear in the list', much the same as the new real number Cantor generates from the diagonalization method. My critique is directly addressed here and the above issue is considered a non-problem because natural numbers cannot be infinite in length. However, it's not obviously impossible to have infinitely long numbers, since we do have some numbers that are infinite in length, like the decimals used in the set of real numbers listed by Cantor, and further, the first section on this page about infinite series and randomness suggests we could have infinitely long natural numbers. So it's hard for me to consider this issue a closed matter. Had Cantor started with indexing the Reals instead of the Naturals, he would have concluded that the infinite set of natural numbers was qualitatively larger than the set of real numbers. I believe he avoided doing it this way for the reason in the last paragraph — that natural numbers are finite in length. Even if the above is a non-problem, it is only one of the three issues I have found with diagonalization.
For the second issue, I believe his derivation of different cardinalities is due to his mixing of potential infinity with actual infinity, something already known to be improper in metaphysics as far back as Aristotle's original distinction between them. Whether you use my reversed list or Cantor's original list, both ways require the use of potential infinity for the index numbers and actual infinity for the numbers enumerated to the right of the index. So of course this would appear to be different kinds of infinity, because you have begged the question and baked a pre-supposed conclusion into the formulation of its proof. The third issue with Cantor's diagonalization is that it requires the ordering of his real numbers to be random, as we shall see, since reordering the list from smallest to largest real numbers demonstrates the diagonalized new number in fact already appears on the list. If Cantor's list were in order from smaller to larger infinite decimals, we would see the list ordered as 0.111, then 0.112, 0.113, and so on. If we then applied diagonalization we would get 0.2 from the first real, 0.02 from the second, 0.002 from the third, and so on, resulting in 0.22 as our newly generated real number. But of course, when ordered from smallest to largest decimal, 0.22 would appear later in the list. I contend that if you don't believe I applied Cantor's diagonalization properly, then you have not been careful to note the kind of infinities I used. In my prior example, I do not mix potential with actual infinity, and thereby the list of 0.11n's we enumerated earlier results in an infinite number of reals that lead with 0.11, meaning we generate an infinite series of 2's. If you think this problem is resolved by baking the idea of uncountability back into real numbers, then simply swap the naturals with the reals as I did in the pictures earlier and you'll see the problem I just described reoccurs.
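The diagonal step itself is mechanical and runs the same way whichever list it is aimed at, which is the point of the swap above; a finite sketch on the 0.111, 0.112, 0.113 rows (the flip rule, 2 unless the digit is already 2, is one of many valid choices):

```python
# Cantor-style diagonal flip on a square list of digit strings:
# change the k-th digit of the k-th row so the result differs
# from every row in at least one position.
def diagonalize(rows):
    return "".join("2" if row[i] != "2" else "1" for i, row in enumerate(rows))

rows = ["111", "112", "113"]
new = diagonalize(rows)
print(new)  # "222", matching the 0.222... derived in the text
assert all(new[i] != rows[i][i] for i in range(len(rows)))
```

Whether the rows are read as the decimal tails of reals or as the digit strings of naturals makes no difference to this step; the dispute above is over whether an infinitely long output can name a natural number.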
So even if one or two of these issues can be resolved, the third will re-assert itself, making the notion of distinct cardinality between infinite sets in math an untenable position. If I am horribly wrong about this, and I know I am, I would appreciate someone who knows better taking the time to explain to me how this actually works.
Separate Cardinalities of Zero: After thinking a lot about Cantor's infinite sets, I started to notice that infinity and zero share a lot of properties. You can multiply or divide infinity by any number and it is still infinity. You can multiply or divide zero by any number and it is still zero. Infinity and zero are both non-numbers since the first is a non-finite quantity and the second is the lack of a
quantity. Both represent limits of calculation since infinity is an upper bound of quantity that cannot go any higher and zero is a lower bound of quantity that cannot go any lower. Further, dividing any number by zero returns infinity and, vice versa, dividing any number by infinity returns zero. There are other relations they share but I think you get the point. Given the similarities, I wondered, despite my complaints in the prior section, if Cantor is right about the difference in kinds of infinities, then maybe there is also a difference in kinds of zeros. You would think we couldn't use the same diagonalization method for zeros that Cantor uses for infinities since infinite lists can be enumerated by all the real numbers and zero is just one zero, but by using infinitesimals like 0.001 or 0.0099, we have an infinitely enumerable set of zeros we can index and apply diagonalization to just the same as Cantor did. The objection to this would be that infinitesimals are not distinct values and do not qualify as being distinct in identity, however that is only true if the next section is wrong. So depending on how the next section works out, we may have a means to prove that there are distinct cardinalities of zeros. But I have no idea how far off base any of my ideas are without talking to real mathematicians, so I'm probably wrong about everything anyways.
Multiple-Infinite Decimals: 1/3 = 0.33 repeating, and 2/3 = 0.66 repeating, but what numbers out of a whole give us the other repeating decimals? (Throughout this section, a decimal like 0.33 or 1.50015 is read with its marked tail repeating; the overlines don't survive plain text.) If we wanted 0.66 out of 1 whole instead of 2, we get the following: ____________ 1 / x = 0.66 1 = 0.66 • x x = 1 / 0.66 x = 1.5 ‾‾‾‾‾‾‾‾‾‾‾‾ But I contend that this number is actually 1.50015. Why and how? Dividing 1 by 0.666,
we get 1.5015, by 0.6666 we get 1.50015, by 0.6666666666 we get 1.50000000015, and so on. By 0.66 what we get is an infinite series of infinities, namely the infinite bar between 15's, that is 1.50015001500, repeating. This is the same as saying 1.5 with an infinite series of zeros following it, and then after infinite zeros there is a 15 followed by another infinite series of zeros, and so on. Another way of saying this is that as the antecedent (divisor) grows in decimal length, so too does the number of zeros between the digits of the decimal of the consequent (quotient). Therefore with an infinitely-repeating-decimal divisor you get infinitely repeating zeros followed by a finite series of numbers, the set of which itself then infinitely repeats, in the quotient. I've had people argue with me that, "This is not how fractions work," and if we were using whole-number fractions, they would be right, as one divided by two-over-three becomes three over two and then cleanly resolves as one and one-over-two. But we aren't concerned with whole-number fractions here; the property I describe shows that the numbers in decimal format are not 'cleanly' divided. 1 divided by 3 gets you 0.33, but as now described, 1 divided by 0.33 does not seem to get you 3. The closest thing to this I have been able to find online are Shanks' numbers, but those are distinct in scope and application, so I call these other numbers Snax's Bar Numbers. I have written out some of the bar numbers below so you can see their weird properties. 0.11 is 1/3 of 0.33 so 3 by 3 should mean 9, and in fact we see that 1/9 does equal 0.11. ________________________ 1 / x = 0.11 1 = 0.11 • x x = 1 / 0.11 x = 9.00900900 repeating ‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾ This means 900 is the bar number attained from 0.11.
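The zero-growth pattern can be reproduced with Python's Decimal type, as an illustration with truncated divisors, since no finite program can divide by a genuinely infinite decimal:

```python
from decimal import Decimal, getcontext

# 1 divided by 0.66...6 with k sixes: as k grows, more zeros pad the
# repeating "15" groups, which is the pattern described above.
getcontext().prec = 30
for k in (3, 4, 6, 10):
    print(k, Decimal(1) / Decimal("0." + "6" * k))
```

In exact terms the divisor with k sixes is 6(10^k - 1)/(9 * 10^k), so the quotient is 1.5 * 10^k/(10^k - 1), which is where the "15" recurring with period k comes from.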
To reiterate why this happens we can follow the non-bar series, which results in: __________________________ 1 / 0.1 = 10 1 / 0.11 = 9.0909 1 / 0.111 = 9.009009 1 / 0.1111 = 9.00090009 1 / 0.11111 = 9.0000900009 ‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾ You keep adding zeros per n decimals of 1 from here, ultimately giving us the bar number we attained, 900, for the series of 0.11. A list of these follows. For 0.11 we get 9.009, or 900 as the bar number. For 0.22 we get 4.50045, or 4500 as the bar number. For 0.33 we get 3.003, or 300 as the bar number. For 0.44 we get 2.2500225, or 22500 as the bar number. For 0.55 we get 1.80018, or 1800 as the bar number. For 0.66 we get 1.50015, or 1500 as the bar number. For 0.77 we get 128571428571428571428700 as the bar number (see below). For 0.88 we get 1.125001125, or 112500 as the bar number. For 0.99 we get 1.001, or 100 as the bar number. N.b., 0.11 and 0.99 are inverses of each other but there are no other inverses. Notice also the strangeness of 0.77's bar number and how no other bar creates the same level of noise so far. 0.77 is more dynamic and there appears at first to be no upper bound on the series length or mutations; it does resolve, but I don't know how many digits out it takes to resolve since I only tried up to 12 and then skipped to 30.
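The same truncated-divisor computation reproduces the whole list; a sketch at 6 repeats per digit:

```python
from decimal import Decimal, getcontext

# Quotients behind the bar-number list: 1 / 0.dd...d for each digit d,
# divisor truncated at 6 repeats. Every digit except 7 settles into a
# short repeating block; 7 trends toward 9/7 = 1.285714... instead.
getcontext().prec = 30
for d in "123456789":
    print(d, Decimal(1) / Decimal("0." + d * 6))
```

Exactly, 1/0.dd...d with k repeats is 9 * 10^k / (d * (10^k - 1)), so the truncation length k controls how many zeros pad each repeating block.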
__________________________________________________________________________________________________________ 1 / 0.7 = 1.42857142857 1 / 0.77 = 1.29870129870 1 / 0.777 = 1.28700128700 1 / 0.7777 = 1.28584287000128584287000 1 / 0.77777 = 1.28572714298571557144142870000128572714298571557144142870000 1 / 0.777777 = 1.28571557142985714414285842857271428700000128571557142985714414285842857271428700000 1 / 0.7777777 = 1.28571441428572714285842857155714287000000128571441428572714285842857155714287000000 1 / 0.77777777 = 1.28571429857142870000000128571429857142870000000 1 / 0.777777777 = 1.28571428700000000128571428700000000 1 / 0.7777777777 = 1.28571428584285714287000000000128571428584285714287000000000 1 / 0.77777777777 = 1.28571428572714285714298571428571557142857144142857142870000000000 1 / 0.777777777777 = 1.28571428571557142857142985714285714414285714285842857142857271428571428700000000000 ‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾ This is strange since it resolves to an infinite period in the bar number and further there are two infinite sequences within the general infinite sequence (demarcated by the double-bar). I believe this also serves as proof that the numbers past the infinite repetitions are non-trivial since there is not an infinite sequence of '00' at the tail of 142857, but rather '14287' instead. The bar number for 0.77 does not have a finite period length, even though all the others do. Fractional divisors in equations for physical systems that result in very fuzzy statistical outcomes could probably be cleaner by exploiting this property since infinite values appear often in those systems. As proof for this, look to the convention that the number 0.001 is equal to 0. 
Since there are different kinds of infinities in math, not including my objections to the counter in a previous section on this page, the countable infinity in the number 0.001 will be overcome when divided by a number that is an uncountable infinity. The infinite part of 0.001 can be skipped over by an uncountable infinity, leaving the 1 at the end as a non-arbitrary part of the divisor. This makes what I've been calling the 'bar' numbers above worth considering for application in cleaning up infinities. "But what about 0.12, or 0.69, or 4.20?" Most of the numbers I've looked at don't result in much of anything interesting, e.g. if we look at 1.11 we get 0.9009 (or 900), which is in line with what we've already seen. However, some numbers have unique properties, like 1.22, which resolves as follows: _________________________________________________________________________________________________________________________________________________________ 1 / 1.2 = 0.833 1 / 1.22 = 0.819672131147540983606557377049180327868852459016393442622950 1 / 1.222 = 0.818330605564648117839607201309328968903436988543371522094926350245499181669394435351882160392798690671031096563011456628477905073649754500 1 / 1.2222 = 0.818196694485354279168712158402880052364588447062673866797578137784323351333660612011127475045000 ‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾ So what is the bar number for 1.22? This one is weird to me because the bar number does not seem to be static. The number we get changes depending on whether the infinite series of decimals is even in length or odd in length. If the infinite series of decimals for 1.22 is even in length, then the bar number resolves as 819669421... and if the infinite series of decimals for 1.22 is odd in length, then the bar number resolves as 818330578....
In both cases, WolframAlpha suggests there is a repeating period length for the sequence that follows 81 but the complete sequence is infinite and so WolframAlpha does not say what that period length is. The period that follows 81 grows rapidly as you include more decimals, and the number that follows 81 resolves granularly to a definite series, yet is presumably infinite in length at its absolute resolution. The fact that the number it resolves to alternates its determination dependent on how the bar 'feels' as a function of evenness or oddness of the infinite decimal series for 1.22 is strange in itself, but what makes this more challenging for me is that a 'true' calculation of 1 divided by 1.22 could not really resolve to any number since the series could not be determined in finite time. I need someone much smarter than me to explain this. We have of course only looked at repeating decimals divided out of 1, and could go through the same infinite list of decimals and divide them out of 2, or 94, or π, and get new infinite lists of bar numbers, most of which would probably never be touched or be useful to anyone or anything. But I think it's neat.
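As a cross-check on the 1.22 rows, exact fractions show where the long periods live: the truncated divisors approach 11/9, so the quotients approach 9/11 = 0.8181..., while each truncation carries its own long period (e.g. 1/1.22 = 50/61, whose decimal period is 60 digits); a sketch:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

# Truncations of 1.22-repeating as divisors, and the exact limit 9/11.
getcontext().prec = 30
for k in (1, 2, 3, 4):
    print(k, Decimal(1) / Decimal("1." + "2" * k))
print("limit:", Fraction(9, 11), "=", float(Fraction(9, 11)))
```

The even/odd alternation noted above is a property of these finite truncations; the untruncated value is just the repeating 0.8181... of 9/11.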
Invented, Not Discovered: Numbers don't real and Platonism is a cope. Mathematicians and physicists alike can't fathom why an invented description of the world would give accurate predictions of it, despite almost every other framework we invented also giving accurate predictive power in their relevant domains (like economic, political, and psychological theory, etc.), so they stand in awe of mathematics' predictive power
like feeble-minded infants and loathe basic questions like, "Why is math true?" or, "What is its subject of study?" Peak cope looks like this, where established scientist Max Tegmark says in more or less these words, "All frameworks of mathematics are true, there's a place where Euclidean space is real, just not here." At the very beginning of that video he also says we aren't free to invent a sixth regular 3D shape because it doesn't exist, however you absolutely are free to invent one. People did this with imaginary numbers; we could easily do it with imaginary regular shapes and vary the number of sides each flat surface has, kinda like analytic continuation but applied to basic geometry. The problem is even worse than I am making it seem. Consider that discoveries map one-to-one on the world, e.g. if I have discovered a hidden chamber below the Sphinx, then I have discovered a hidden chamber below the Sphinx — I have distinctly not discovered a room floating in space several lightyears away from the Sphinx. A discovery about X returns X, not not X. Discoveries don't return contradictions. However, when asking two different frameworks of mathematics the same question, they return different answers. Let's take two frameworks that are supposedly 'objective' and ostensibly even about the same exact kind of phenomenon, like Leibnizian Calculus and Newtonian Calculus. If you were to ask these two frameworks of calculus where the center of mass of the Moon will be in one day's time, they will return two different points in space to you. This is saying the Moon will be at X and not X at the same time in the same regard — a contradiction. Since discoveries don't return contradictions, one or both of these mathematical frameworks has to be invented. Contrast math (also known as second-order-logic) with formal logic (first-order). Here we start to see what's really going on.
Formal logic only has one framework, and that's the one Aristotle discovered 2400 years ago, remarkably unchanged and more remarkably unappended since then; Aristotle completed the discipline. Sure, there are different ways people try to symbolize logic as predicate, existential, modal, and so on, but the rules of logic are finished, and they're finished in a single package. The same cannot be said about math because math isn't an actual component of reality. My real thoughts on the ontological status of math are a lot more charitable than what I've said here, but it's worth noting that even if one of the frameworks of mathematics we currently have ends up being discovered rather than invented, it is still the case that all the others weren't.
Circles: I have been told that it is valid in math to have a circle with diameter zero. But, does this mean that a circle with radius zero is half as small as a circle with diameter zero? It seems to me that a circle with a diameter of zero would also have a radius of zero, since half of zero is still zero. And if this is all true, then it follows that this circle must also have a circumference of zero. My understanding
of a 'point' is that points have no diameter, no radius, no circumference, and so on, so why is this circle not just a point? Unrelated, if we remade the unit circle at base 100 instead of base 360, the numbers work out better for normies and base 100 means you can use percent values to ascertain positions on a grid. This is much easier to visualize mentally for most people. This also makes it possible to give %-%-% formatted coordinates for points in 3D space. Again much easier to cognize than using 360 or π. This also turns the unit circle into an actual single unit since 100% equals 1 whole, instead of "2 units of π," which by its very description is not a unit circle but a two-unit circle. By happenstance this is also better for relativistic frameworks used in mapping galaxies. A system like this gets used sparsely in some of the games we've dev'd on Snerx. If you want to look at other dumb shit we've done with percent-based relativistic frameworks you can check some of those out here.
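The percent convention is easy to sketch; a minimal example (function name mine) mapping 0-100 percent of a turn to a point on the unit circle:

```python
import math

# Position on the unit circle given as a percent of one full turn,
# instead of 0-360 degrees or 0-2*pi radians.
def percent_to_xy(p):
    theta = (p / 100.0) * 2 * math.pi  # percent of a turn -> radians
    return (math.cos(theta), math.sin(theta))

print(percent_to_xy(0))   # (1.0, 0.0)
print(percent_to_xy(25))  # a quarter turn: (0, 1) up to float rounding
print(percent_to_xy(50))  # half turn: (-1, 0) up to float rounding
```

This is close in spirit to gradians, which also decimalize angles but put 100 units in a right angle (400 per turn) rather than 100 per full turn.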
Tarski & Gödel: I wish to attack aspects of the incompleteness theorem and show that since set theory is not a properly formal part of first-order-logic, its implications in second-order-logic are also malformed, pulling from the undefinability theorem to justify this, as well as recursively seating the incompleteness theorem inside itself to invalidate itself, showing internal inconsistency similar to a
halting problem, but it will be a while before I get to writing this out so this is a placeholder section until then. I want to also make a physical computational implementation of the halting problem to help illustrate this issue, and that will be added to the /engineering page when that happens.