# Thread: Is almost infinite non-linear memory possible?

1. Currently, almost all computers use a linear type of memory: the more data you want to store, the more physical memory you have to use. But there is an idea that a non-linear type of memory could be created, allowing almost infinite memory capacity. The basis of this idea is that all the processes and events surrounding us are continuous, and are perceived as an uninterrupted influence on us. Suppose we create a memory device in which the state of every memory cell is functionally dependent on the state of all the other cells. Then each impact on the memory would affect all of it, or a significant part of it. Further, if the number of memory cells in such a memory is much larger than what each particular impact requires, the system will start to behave non-linearly.

If a signal is received at the input of such a memory, the whole memory changes according to a well-known function: the signal is recorded. To replay the data, we only need to define the starting point from which reproduction has to begin, and then read the data back according to the well-known (inverse) function. Also, by changing the reading speed, we could replay any segment of continuous data at very high speed.
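To make the "every impact changes every cell" scheme concrete, here is a purely illustrative toy sketch (every name and parameter below is my own invention, not from any real design): each write perturbs the whole state through a known invertible function, so readback is just applying the inverse.

```python
import random

random.seed(42)
N = 16                                   # number of memory cells (tiny, for illustration)
state = [0.0] * N                        # the whole memory starts at rest

def mask(key):
    """Pseudo-random pattern spanning every cell, derived from a write key."""
    rng = random.Random(key)
    return [rng.uniform(-1, 1) for _ in range(N)]

def record(state, key, value):
    """A single write perturbs ALL cells (the 'well-known function' f)."""
    m = mask(key)
    return [s + value * mi for s, mi in zip(state, m)]

def replay(state, key):
    """Invert the write: project the state back onto that write's mask."""
    m = mask(key)
    return sum(s * mi for s, mi in zip(state, m)) / sum(mi * mi for mi in m)

state = record(state, key=1, value=3.5)
print(round(replay(state, 1), 2))        # recovers 3.5
```

Note that as soon as a second value is written, readback becomes only approximate, because the masks are not orthogonal; that interference, growing with every new write, is exactly where an "almost infinite" capacity claim would have to be defended.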

There are suggestions that once the number of cells in such a memory reaches a certain amount, the non-linearity will become very high and the memory almost infinite.
There are also speculations that some living creatures, including humans, could have this type of memory.
Currently, only a small fraction of memory storage shows some rudimentary non-linear behaviour, such as the use of a "delta function" or similar, where only the changes of a continuously varying function are recorded.
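The "delta function" storage mentioned here is presumably plain delta encoding: store a starting value plus the successive differences instead of every sample. A minimal sketch:

```python
def delta_encode(samples):
    """Store the first value, then only the change at each step."""
    deltas = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        deltas.append(cur - prev)
    return deltas

def delta_decode(deltas):
    """Rebuild the original stream by accumulating the changes."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

signal = [100, 101, 103, 103, 102]
enc = delta_encode(signal)
print(enc)                        # [100, 1, 2, 0, -1]
assert delta_decode(enc) == signal
```

For a slowly changing signal the deltas are small and compress well, which is the advantage alluded to above; a lossy variant (coarser deltas, lower sampling rate) is where the distortion comes in.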

I'm not sure I understood everything correctly about this idea, so maybe there are some suggestions on it?

3. Originally Posted by Stanley514
There are suggestions that once the number of cells in such a memory reaches a certain amount, the non-linearity will become very high and the memory almost infinite.
To aid my understanding of your post, could you give a link to these suggestions? I'd like to get a better idea of what you are describing.

Also, and this is just the mathematical purist in me, can you say what you mean by almost infinite? No finite number, no matter how large, is "almost" infinite. Do you mean unlimited by, say, the number of atoms in the universe? Or unlimited by linearity? Or just really really big?

Your post is right on the border between cranky and potentially interesting; and I'm trying to get a nudge one way or the other.

4. To aid my understanding of your post, could you give a link to these suggestions? I'd like to get a better idea of what you are describing.
This is a translation of the article into English. Since Google Translate is far from perfect, I'm not sure whether it will be useful:

Almost unlimited memory.

Let's start with the fact that most of the processes and events around us are continuous, or are at least perceived by us as a continuous influence.
In the usual sense, recording onto any carrier is characterized by a strict correspondence between the influence (the argument X) and the record (the function Y): Y = f(X); playback is characterized by the strict correspondence of the inverse function: X = f^-1(Y).
In practice, storage devices (memory) use a direct correspondence between the recorded signal (the function Y) and the influence, that is, the source signal (the argument X). The recorded data stream corresponds to the source data stream; and the more closely the recorded stream matches the source stream, the more faithful a reproduction we get from the inverse function (playing the record back).
Leaving aside such issues as the fidelity of the record to the source, introduced distortion, and so on, let's just say that any type of carrier (memory) distorts to a larger or smaller, objective or subjective, degree.
Existing storage devices (memory) are for the most part linear: the more information you must store, the larger the memory that has to be used. Only a small fraction of memory uses the so-called delta function or the like, where only the change of a continuous function is stored. Along with the visible advantage of a much smaller amount of memory (not the entire source data stream is stored, only its changes), there is significant distortion of the signal on subsequent playback. However, by increasing the sampling rate, better fidelity can be achieved.

The recording process and the playback process

Now imagine a memory in the form of a black box with unlimited capacity, where you can write any amount of information. It is very important to know the function by which the state of the memory changes in accordance with the exciting influence:
Y(memory) = f(X(excitation))

Knowing this function and clearly describing all its parameters, we can easily reproduce the recorded excitation signal. Hence we formulate the first task: to create or describe a function that defines a clear rule of direct and inverse transformation.

Let's try to understand what this "black box", a memory with an infinitely large capacity, should look like. Linear memory is always finite: the more information you need to record, the greater the amount of memory required. Spatial (volumetric) memory, addressing cells by the coordinates X, Y and Z, is a more compact solution, but it is still a variant of a linear function.
By creating such a spatial memory and introducing a functional dependence of each memory cell on the state of all the others, we obtain a "black box": a memory in which all of it, or most of it, changes with each exciting influence.
When the memory capacity is merely comparable to the exciting influence, such a system is still linear; but consider the case when the number of memory elements is much larger than required for a single excitation. Then the system no longer behaves linearly:

[Figure: memory capacity (Y axis) versus number of memory cells (X axis), showing a linear portion followed by a non-linear part.]

A signal is received at the input of the "black box", and the state of the entire "black box" changes according to a function known in advance: the input signal is recorded. To play it back, we only need to define the starting point from which to begin, and then read out the recorded information using the inverse function, also known in advance. Moreover, by changing the playback speed at will or as needed, any length of continuous information can be "played back" in a very short period of time.

Hence we formulate the second task: determine the volume of interdependent spatial memory at which non-linearity will begin to show, and the level of non-linearity at which a further increase in capacity would be pointless.

Comparing this with biological creatures, with the spatial organization of their brains as a "black box", we can assume that such a memory organization could exist in living things. Upon reaching a certain brain volume, a living organism would become the owner of an almost infinitely large memory. The amount of memory would then greatly exceed what is strictly required, making possible in-depth analysis and seemingly illogical solutions, i.e. creativity.

5. Originally Posted by Stanley514
This is a translation of the article into English. Since Google Translate is far from perfect, I'm not sure whether it will be useful:
Thank you for the translation. I'd be grateful for a link.

As far as Google Translate goes ... if that's the future of artificial intelligence, the human race has nothing to worry about. As a frequent user of Google Translate, I'd like to see a decent translation out of that thing before they let Google software drive cars :-)

ps -- I haven't finished reading all of this yet, but I did note that you said, "Now imagine a memory in the form of a black box with unlimited capacity, where you can write any amount of information." and then proceeded to draw a conclusion. But it seems to me that you are already assuming the thing you want to prove.

I also noted something early on, in the very first sentence:

"Let's start with the fact that most of the processes and events around us are continuous, or are at least perceived by us as a continuous influence."

That is patently false. As a counterexample: when you go to the movies, the film is shot at 24 frames per second, because the early filmmakers determined that humans perceive 24 fps as continuous motion. When you play an action video game, the motion seems continuous, but the computer is only rendering individual frames, one after another.

This is true of all our senses. Our nerve cells and skin receptors have discrete thresholds below which they do not fire. Our eyesight, hearing, and senses of smell and taste all work the same way. If a single photon hits your retina, you don't see anything.

So, your philosophical premise is false; and your proof utilizes a magic box that already has the property whose existence you wish to prove.

I vote crank. But I'd still like to see the link. I love crankery. I don't even mean to be disparaging. I'm just expressing my opinion based on a cursory look.

6. In answer to the question in the title, no. There are physical limits on the amount of information you can fit into a given volume. As an illustrative example, consider how many yes/no answers you can theoretically store on a single atom.
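One way to put a number on that limit is the Bekenstein bound, I <= 2*pi*R*E / (hbar*c*ln 2), the maximum information that fits in a sphere of radius R containing energy E. The calculation below is my own back-of-the-envelope illustration, not from the thread, and the radius and mass are just order-of-magnitude figures for a hydrogen atom:

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

# Rough figures for a hydrogen atom (illustrative assumptions)
R = 5.29e-11             # Bohr radius, m
m = 1.6735e-27           # mass of a hydrogen atom, kg
E = m * c**2             # rest energy, J

# Bekenstein bound: maximum information, in bits, that can be
# stored in a sphere of radius R containing total energy E
bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
print(f"Upper bound: about {bits:.1e} bits")   # a few million bits
```

Huge, but emphatically finite: the bound alone rules out truly infinite memory in a finite volume, whatever the encoding scheme.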

Also, driving a car is much easier than speaking a language fluently. (It just doesn't seem that way since our brains are hardwired to process language, but think of how many more people drive compared to how many learn a second language.)

7. Originally Posted by MagiMaster
Also, driving a car is much easier than speaking a language fluently. (It just doesn't seem that way since our brains are hardwired to process language, but think of how many more people drive compared to how many learn a second language.)
Yes, I thought of that. Natural language processing is much different from driving a car. I'm sure someone with a theoretical CS background could place them in different complexity classes.

But still ... I'm thinking of the last time I drove through the San Francisco financial district on the way south out of town at rush hour. The number of judgments -- not just observations, but value judgments -- that you have to constantly make is not generally recognized. "I need to get over to the right. Check the driver of the car that needs to let me in. (a) He looks mean and distracted. Never mind, I won't make my move; I'll go around the block instead. Or (b) He looks friendly and aware, and he just slowed his car and waved me in. I smiled and waved back and made my move."

In city traffic you make these judgments all the time. You are not just calculating the distances and speeds of the cars around you. You often make eye-contact or wave-contact with drivers. Or you see someone driving aggressively so you alter your own behavior as to avoid them.

I think the facial recognition of other drivers, the human cooperation out there, is not acknowledged by the proponents of computer-driven cars. I think the idea will fail in any busy urban area. That is my opinion.

Driving a car is a much harder problem than people realize. Maybe not quite as hard as natural language translation. But Google Translate is really terrible. And it's made by the same company. So I am making a prediction here, and I think we'll all have to wait a few years to see how this works out. I say self-driving cars will fail in city traffic. The problem is much more than physics.

Of course this is an opinion. States are starting to legalize these things. I'm sure our wise leaders know best. Stay tuned :-)

8. Once self-driving cars outnumber the human-driven cars (which I agree won't be soon), then there won't be the same kind of aggressive driving to avoid. I don't think that the people making these cars are ignoring or overlooking such problems, though. They may not talk about them much in press releases, but they're certainly things that need to be taken into account while driving. Although I would argue that facial recognition is not important. I never look at another driver's face while driving. I mainly pay attention to how they're driving instead. (I do see when they wave, though.)

I predict that self-driving cars will start out with the option to switch modes, and most people will probably use them more on longer, boring stretches of road; but as they become more popular and better programmed, they'll eventually take over most roads. Again, once they reach critical mass, the way traffic works will change. Since the cars will be able to communicate with each other, they can negotiate lane changes and the like in fractions of a second, unlike humans. They also have better reaction times, so they can safely follow closer, meaning more cars can fit on the roads or average speeds can increase. The self-driving cars would almost certainly give any human-driven car a wide berth, since they wouldn't be able to predict its movements as accurately.

It's really only a matter of time though. While some people drive for fun, most people would gladly give up their daily commute to be able to get in a nap (or some work) on the way to their job. It's somewhat similar to telecommuting in that respect, although there are still plenty of reasons to actually physically go to work.

10. Simple schemes of non-linearity will reuse existing data and add new data at a lower rate by indexing into it.
True non-linearity would necessitate re-encoding previous data to encompass the new data, thereby forgoing some of the ever-growing indices, but paying the cost of re-encoding possibly all previous data.

Simple schemes of non-linearity will reuse existing data and add new data at a lower rate by indexing into it.
True non-linearity would necessitate re-encoding previous data to encompass the new data, thereby forgoing some of the ever-growing indices, but paying the cost of re-encoding possibly all previous data.
I have no idea, but I suspect what they mean is an analog memory, in which any memory cell could take any value between x and y, rather than a digital one. So when new data is recorded, the values of the rest of the memory are smoothly changed in some way rather than completely rewritten.

12. Originally Posted by Stanley514
Simple schemes of non-linearity will reuse existing data and add new data at a lower rate by indexing into it.
True non-linearity would necessitate re-encoding previous data to encompass the new data, thereby forgoing some of the ever-growing indices, but paying the cost of re-encoding possibly all previous data.
I have no idea, but I suspect what they mean is an analog memory, in which any memory cell could take any value between x and y, rather than a digital one. So when new data is recorded, the values of the rest of the memory are smoothly changed in some way rather than completely rewritten.
None of those ideas overcome the physical limitations I mentioned.

If you overwrite existing data, you have destroyed that old data and you cannot say you've increased the memory. If I have just enough space to store the answer to one yes/no question, and then I store a second yes/no answer in the same space, I can no longer answer the first question and I still only have one answer stored.

If you're talking about data compression, there are limits on that as well. If I store two completely unrelated yes/no answers, there's no way to fit them into fewer than 2 bits of information. (If you know something about the answers you're storing, you can sometimes reduce the average size. For example, if you know that the answer to one of those questions is always yes, there's no point in storing it at all. Either way, though, you cannot reduce it below a known limit.)
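The known limit being described here is, presumably, the entropy of the source (Shannon's source coding theorem): no encoding can average fewer bits than that. A quick sketch, using nothing beyond the standard library:

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two independent, fair yes/no answers: 4 equally likely outcomes.
print(entropy([0.25] * 4))   # 2.0 bits; no scheme can average below this

# If one answer is known to always be "yes", only 2 outcomes remain.
print(entropy([0.5, 0.5]))   # 1.0 bit; the known answer costs nothing
```

This matches both halves of the argument above: unrelated answers cost a full bit each, and prior knowledge about the answers is the only thing that lets you store less.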

Analog memory cannot store just any value between x and y. You still need to be able to distinguish two different answers, and there's plenty of noise to blur together two answers that get too close. (And no, you cannot remove all the noise.) Besides that, when you get small enough, you run into quantum mechanics and the simple fact that any flow of electrons is composed of a huge number of individual particles. Each of those particles can only store a finite amount of information.
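One way to make "noise limits analog storage" concrete is the Shannon capacity of a noisy value: with signal power S and noise power N, you can reliably distinguish roughly sqrt(1 + S/N) levels, i.e. store 0.5*log2(1 + S/N) bits per analog value. A sketch (the 90 dB figure below is just an illustrative assumption, roughly studio-grade audio):

```python
import math

def analog_bits(snr_db):
    """Bits reliably storable in one analog value at a given SNR,
    per the Shannon capacity of a Gaussian channel."""
    snr = 10 ** (snr_db / 10)        # convert dB to a power ratio
    return 0.5 * math.log2(1 + snr)

# Even a very clean analog cell holds only about 15 bits,
# not "any value between x and y":
print(round(analog_bits(90), 1))
```

So an analog cell is just a digital cell with a noise-determined (and always finite) number of levels.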

"Besides that, when you get small enough, you run into quantum mechanics and the simple fact that any flow of electrons is composed of a huge number of individual particles. Each of those particles can only store a finite amount of information."

Yeah. As an interesting side note, I've heard that a given qubit theoretically stores an infinite amount of information about how it was prepared, but the catch-22 is that you can't see it, because you collapse it to one of two states as soon as you observe it. By preparing a large number of qubits in the same orientation, you can start to recover that information by observing lots of them and recording the probabilities of collapse, but that puts you right back at the first problem: lots of qubits means lots of space.
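That recovery process is easy to simulate classically (pure Python; the preparation angle below is an arbitrary assumption): each measurement yields a single bit, and the continuous parameter emerges only statistically, ever more slowly.

```python
import math
import random

random.seed(0)

theta = 1.234                        # hidden preparation angle (assumed)
p_one = math.sin(theta / 2) ** 2     # Born rule: chance of measuring |1>

def estimate(shots):
    """Estimate p_one by measuring `shots` identically prepared qubits."""
    ones = sum(random.random() < p_one for _ in range(shots))
    return ones / shots

# Error shrinks only like 1/sqrt(shots): pinning down one continuous
# parameter to k extra digits costs ~100x more qubits per digit.
for shots in (100, 10_000, 1_000_000):
    print(shots, round(estimate(shots), 4))
```

Which is the point: the "infinite" information in the preparation is never available at a finite cost in qubits and measurements.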

14. Originally Posted by TridentBlue
"Besides that, when you get small enough, you run into quantum mechanics and the simple fact that any flow of electrons is composed of a huge number of individual particles. Each of those particles can only store a finite amount of information."

Yeah. As an interesting side note, I've heard that a given qubit theoretically stores an infinite amount of information about how it was prepared, but the catch-22 is that you can't see it, because you collapse it to one of two states as soon as you observe it. By preparing a large number of qubits in the same orientation, you can start to recover that information by observing lots of them and recording the probabilities of collapse, but that puts you right back at the first problem: lots of qubits means lots of space.
I don't know if you can really say a qubit holds an infinite amount of information, though; I suspect that answering that question would require defining precisely what you mean by "information".
