In a waking day of 16 hours, you are conscious for 960 minutes.  If a “memory event” is something in the neighborhood of 10 seconds, that’s roughly 5,000-6,000 opportunities for a memory per day.  How many memories do you have from yesterday?  If it’s in the neighborhood of 100, that’s a memory storage fraction in the range of 1-2% of experience.  The experience of remembering episodes often feels somewhat sequential, in that we can remember what happened after something else we remembered, so possibly the storage fraction is higher and we’ll need a better model of the retrieval process.  But whatever the fraction is, it reflects some small slice of our experience.
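As a quick back-of-the-envelope check (the 10-second event size and the count of roughly 100 recalled memories are round, illustrative numbers, not measurements):

```python
# Back-of-the-envelope estimate of the daily memory storage fraction.
# Assumes a 16-hour waking day, ~10 s "memory events", and ~100
# retrievable memories from yesterday (all rough, illustrative numbers).
waking_hours = 16
event_length_s = 10
memories_recalled = 100

waking_seconds = waking_hours * 60 * 60                   # 57,600 s
memory_opportunities = waking_seconds // event_length_s   # 5,760 events

storage_fraction = memories_recalled / memory_opportunities
print(f"{memory_opportunities} opportunities, "
      f"storage fraction ~{storage_fraction:.1%}")        # ~1.7%
```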

The simplest possible model of memory storage (encoding) is a stochastic function with a probability of something like 1:100 to 1:50 (1-2%) for each event you experience.  That is, you are experiencing events in a relatively constant sequence, and the specific neurobiological processes of the MTL that create a new memory trace/item occur stochastically throughout your day, laying down a random subset of memories.
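As a minimal sketch, this model is just an independent coin flip per event; the only real parameter is the 1-2% rate, and the simulation details below are illustrative:

```python
import random

# Minimal sketch of the purely stochastic encoding model: each ~10 s
# event is stored independently with a small, fixed probability.
P_STORE = 0.015            # somewhere between 1:100 and 1:50
EVENTS_PER_DAY = 5760      # from the back-of-the-envelope estimate above

random.seed(0)
stored = [e for e in range(EVENTS_PER_DAY) if random.random() < P_STORE]
print(f"{len(stored)} memories stored out of {EVENTS_PER_DAY} events")
# Typically ~80-90 memories per day, forming a random subset of experience.
```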

What falsifies this model?

The ability to “study” and remember some things better does not necessarily counter the model.  One basic element of study is to direct attention to the desired material and repeatedly engage with it.  A purely stochastic model will end up with a greater fraction of studied memories simply as a result of this repetition/attention process.
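A small worked example of why repetition alone helps under the stochastic model: if each exposure has an independent encoding probability p, the chance of at least one successful encoding after n exposures is 1 - (1 - p)^n.  The numbers below are illustrative:

```python
# Repetition under the purely stochastic model: with per-exposure
# encoding probability p, the chance of at least one successful
# encoding after n exposures is 1 - (1 - p)**n.
p = 0.015
for n in (1, 5, 10, 20):
    print(f"{n:2d} exposures -> P(stored) = {1 - (1 - p)**n:.1%}")
# 1 -> 1.5%, 5 -> 7.3%, 10 -> 14.0%, 20 -> 26.1%
```

So studied material ends up over-represented even though each individual encoding event remains a coin flip.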

Emotional memory, especially for negative emotion, does indicate that memory creation is not purely stochastic, as memories associated with negative experience are stronger and longer-lasting.  The effects of positive experience, however, are much more subtle.

Are there any other neurocognitive processes that affect the storage rate/fraction?

One thing wrong with the purely stochastic model is that it doesn’t allow for strong/weak memories, or for the process of decay.  We generally use the term “consolidation” to refer to the temporally extended process of memory change in which most memories become weaker (along an exponential/power-law curve) while some become durable, permanent, and non-MTL-dependent.
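To make the decay part concrete, one common descriptive choice is a power-law retention curve, with a small consolidated fraction that escapes decay altogether.  The functional form and parameter values here are assumptions for illustration, not fitted to data:

```python
# Illustrative retention curve: most memories decay along a power law,
# while a small consolidated fraction becomes effectively permanent.
# The form and all parameter values are assumptions, not fitted data.
def retention(t_days, b=0.5, durable_fraction=0.05):
    decaying = (1 + t_days) ** (-b)
    return durable_fraction + (1 - durable_fraction) * decaying

for t in (0, 1, 7, 30, 365):
    print(f"day {t:3d}: P(still retrievable) ~ {retention(t):.2f}")
# day 0: 1.00, day 1: 0.72, day 7: 0.39, day 30: 0.22, day 365: 0.10
```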

Where is the stochastic process occurring?

A simple neurocognitive model of memory is that information comes into awareness through perception, modified by attention, is held briefly in working memory, and then becomes an MTL-dependent representation that can become a long-term, durable memory.  Since awareness/working memory seems to be constant, the transition to the MTL is where the probabilistic nature enters.

It’s difficult to tell directly whether the failure point is the transfer from WM to LTM, or whether everything is at least briefly in the LTM but then fails to consolidate.  The available indirect evidence slightly favors the latter.  The availability of information that is extremely recent but exceeds WM span suggests that representations are available somewhere (e.g., the MTL) to re-enter WM.  Replay findings in animals, which show re-creation of activity patterns in the MTL that mirror experience, also argue that traces of experience are all present in the MTL at some point.  The fact that default mode activity fairly reliably engages the MTL is additional, weaker evidence in this direction.

We can refine the simple stochastic model to one where everything is at least briefly reflected in MTL activity.  Perceptual representations are held transiently in WM circuits, which are then echoed in MTL activity patterns.  We therefore presume that a neurobiological process stochastically triggers consolidation for some small fraction (1-2%) of these representations, a smaller fraction of which then go on to become durable, very long-term memories.
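A sketch of this refined, two-stage version (the consolidation rate comes from the estimate above; the durable fraction is an assumption purely for illustration):

```python
import random

# Two-stage sketch: every event is transiently represented in the MTL,
# a small fraction stochastically enters consolidation, and a smaller
# fraction of those becomes durable.  Rates are illustrative only.
P_CONSOLIDATE = 0.015   # ~1-2% of transient MTL traces trigger consolidation
P_DURABLE = 0.10        # assumed fraction of consolidated traces that persist

random.seed(0)
mtl_traces = list(range(5760))                 # everything, at least briefly
consolidating = [t for t in mtl_traces if random.random() < P_CONSOLIDATE]
durable = [t for t in consolidating if random.random() < P_DURABLE]
print(f"{len(mtl_traces)} transient traces -> "
      f"{len(consolidating)} consolidating -> {len(durable)} durable")
```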

Other than attention/repetition to increase the probability of specific information getting into the consolidation pipeline, what other factors might increase this probability?

The ability to hook onto or connect with existing memories might increase the probability of successful consolidation.

Spacing/reactivation of existing memories might influence this process as well — although since no two experiences are exactly alike, this may effectively be the same thing as connection to existing memories at a neural level.

We should also note that some direct reactivation processes are known to be maladaptive, e.g., remembering where you parked today needs to be separated from prior parking events.  In theory, there’s a pattern separation process trying to help establish that, which would have to be operating in tension with a connection/reactivation process.

The CLS model captures the pattern separation/consolidation tension by positing that memories are initially separated via a sparse-firing pattern mechanism and then incorporated back into an integrated long-term memory by a temporally extended, gradual consolidation process.  We have noted that the sparse representation mechanism is likely to be low-capacity, potentially creating the bottleneck that causes the 1-2% storage rate.  The eventual distributed representations in the LTM store probably have more capacity than could be filled in a lifetime, but the store does not contain all of our memories because of the bottleneck, which in turn produces the effectively stochastic/probabilistic nature of which small fraction of memories get created.
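One way to picture the bottleneck is a toy simulation in which a very small sparse buffer drains slowly into a large distributed store, so most incoming traces are simply displaced.  The buffer size and drain rate below are assumed values chosen so that the throughput lands near 1-2%:

```python
import random

# Toy illustration of the capacity-bottleneck idea: a tiny sparse "MTL
# buffer" drains slowly into a large distributed LTM store, so most
# incoming traces never make it through.  All parameters are assumptions.
random.seed(0)
BUFFER_SLOTS = 3             # assumed low-capacity sparse store
P_DRAIN_PER_EVENT = 0.005    # assumed chance a buffered trace consolidates per event

buffer, ltm, lost = [], [], 0
for event in range(5760):                      # one waking day of ~10 s events
    # slow drain: buffered traces occasionally consolidate into LTM
    for trace in list(buffer):
        if random.random() < P_DRAIN_PER_EVENT:
            buffer.remove(trace)
            ltm.append(trace)
    # a new event enters only if a slot is free; otherwise it is lost
    if len(buffer) < BUFFER_SLOTS:
        buffer.append(event)
    else:
        lost += 1

print(f"stored {len(ltm)} of 5760 events (~{len(ltm) / 5760:.1%}); "
      f"{lost} never made it past the bottleneck")
# Throughput is set by the drain rate of the small buffer, not the size
# of the LTM store, which is what yields an effective ~1-2% storage fraction.
```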

A problem with this model is that the initially sparse and eventually distributed neural patterns are very different, making it hard to see how prior knowledge could affect novel storage events.  How do we think reactivation, connection, and association of memories work when there are completely different neural representations of knowledge?

Being able to answer that question might, in theory, suggest approaches for engaging with information to be memorized (learned) that would be particularly applicable to education or training, where the goal is to acquire specified sets of related content.  It might also provide some insight into the successes and failures of everyday memory, which do not actually feel totally stochastic.