Fiber-Net ramble

I've had this idea for a while but haven't built anything yet. Essentially, the common version of the universal approximation theorem states that any continuous function on a compact domain can be approximated arbitrarily well, given unrestricted linear operations and a nonlinearity of choice.
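Concretely, the usual one-hidden-layer form says that for any continuous function f on a compact domain and any tolerance, there exist weights making $$\displaystyle \left| f(x) - \sum_{i=1}^{N} c_i \, \sigma(w_i^T x + b_i) \right| < \varepsilon$$ hold everywhere on the domain, where sigma is the chosen nonlinearity.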
Signals in optical fiber can be filtered, meaning their amplitude can be attenuated by an arbitrary factor. They can be split and merged, and nonlinear phenomena like the Kerr effect exist in practice (https://www.nature.com/articles/s41467-023-41377-5). My suspicion is that this is enough to approximate any function arbitrarily well up to a scalar constant. The proof would take the standard neural-network construction and, since attenuation can only shrink a signal, divide each layer's weights by a scalar large enough that every coefficient has magnitude at most one; the product of those per-layer factors is the overall constant the output is known up to.
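As a sanity check on the bookkeeping, here's a minimal NumPy sketch, assuming a ReLU-style nonlinearity (none of the names here come from real hardware; it's just the rescaling argument). It shrinks each layer of an arbitrary MLP so every weight is realizable as pure attenuation, then verifies the output matches the original up to the accumulated constant. The identity is exact only because ReLU is positively homogeneous; a Kerr-type nonlinearity would not commute with scaling this cleanly, so the real claim would need the approximation argument rather than this identity.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def forward(Ws, bs, x):
    # Standard MLP: ReLU hidden layers, linear output layer.
    for W, b in zip(Ws[:-1], bs[:-1]):
        x = relu(W @ x + b)
    return Ws[-1] @ x + bs[-1]

def attenuate(Ws, bs):
    # Divide each layer by its largest coefficient magnitude so every
    # weight satisfies |w| <= 1 (realizable as a passive filter), and
    # track the cumulative scale. Each bias is divided by the scale
    # accumulated up to and including its own layer.
    Ws2, bs2, cum = [], [], 1.0
    for W, b in zip(Ws, bs):
        s = max(np.abs(W).max(), 1.0)  # only ever shrink, never amplify
        cum *= s
        Ws2.append(W / s)
        bs2.append(b / cum)
    return Ws2, bs2, cum

# A toy 3-layer network standing in for a trained model.
Ws = [rng.normal(size=(16, 8)), rng.normal(size=(16, 16)), rng.normal(size=(4, 16))]
bs = [rng.normal(size=16), rng.normal(size=16), rng.normal(size=4)]

x = rng.normal(size=8)
Ws2, bs2, cum = attenuate(Ws, bs)
# ReLU(a * z) == a * ReLU(z) for a > 0, so the attenuated network
# computes the original output divided by the accumulated constant.
assert np.allclose(cum * forward(Ws2, bs2, x), forward(Ws, bs, x))
print("outputs agree up to the constant", cum)
```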
What you would get in practice is a neural network that evaluates effectively instantly, with enormous throughput and essentially no marginal cost per evaluation. Particularly with CNNs and other image-ingestion tools, I can see this working very well.