I've had this idea for a while, but haven't built it. Essentially, the common version of the universal approximation theorem states that we can approximate any continuous function arbitrarily well, given unlimited linear operations and a suitable fixed nonlinearity.
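For reference, one common statement of the theorem (this is my paraphrase of the Cybenko/Hornik-style result, not something from the original post): for any continuous $f$ on a compact set $K \subset \mathbb{R}^n$ and any $\varepsilon > 0$, there exist $N$, weights $w_i, b_i, a_i$ such that

$$\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} a_i \, \sigma(w_i^T x + b_i) \right| < \varepsilon,$$

provided $\sigma$ is, e.g., continuous and non-polynomial.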
Signals in optical fiber can be filtered, meaning their power is attenuated by an arbitrary factor. They can also be split and merged, and nonlinear phenomena like the Kerr effect exist in practice (https://www.nature.com/articles/s41467-023-41377-5). My suspicion is that these primitives are enough to approximate any function arbitrarily well up to a scalar constant. The proof sketch: take a standard neural-network construction and multiply each layer's weights down by a constant factor, so that no component ever needs to grow in magnitude on its way to the output; the result is the target function times a known scalar.
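Here's a minimal numerical sketch of that scaling argument (my own illustration, with assumptions: a ReLU nonlinearity, which is positively homogeneous, and no biases; signs and the actual Kerr response are glossed over — a real optical scheme would need something like a dual-rail encoding for negative weights):

```python
# Sketch: with a positively homogeneous nonlinearity such as ReLU,
# scaling every weight in layer l by a factor c_l also scales the
# network output, by prod(c_l). A fiber "filter" can only attenuate
# (multiply by a value of magnitude <= 1), so we pick
# c_l = 1 / max|W_l| to bring all of layer l's weights into range.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# A random 2-layer ReLU network (no biases, to keep homogeneity exact).
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(3, 8))
x = rng.normal(size=4)

y_ref = W2 @ relu(W1 @ x)

# "Fiber" version: attenuate each layer's weights into [-1, 1].
c1 = 1.0 / np.abs(W1).max()
c2 = 1.0 / np.abs(W2).max()
y_fiber = (c2 * W2) @ relu((c1 * W1) @ x)

# The attenuated network computes the same function up to the known
# scalar c1 * c2, which can be divided back out after detection.
assert np.allclose(y_fiber, c1 * c2 * y_ref)
```

Because the per-layer factors are known at "compile" time, the overall scalar is known too, so recovering the true output is a single division at the detector.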
What you would get in practice is a neural network that evaluates essentially instantly, with enormous throughput at near-zero marginal cost per inference. For CNNs and other image-ingestion pipelines in particular, I can see this working very well.
Fiber-Net ramble