Fiber-Net ramble

I've had this idea for a while, but haven't built one. Essentially, the common version of the universal approximation theorem states that, given unlimited linear operations and a nonlinearity of our choice, we can approximate any continuous function arbitrarily well.
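For concreteness, the one-hidden-layer version (Cybenko/Hornik) says that for any continuous $f$ on a compact set $K \subset \mathbb{R}^d$ and any $\varepsilon > 0$, there exist $N$, weights $w_i$, biases $b_i$, and output coefficients $a_i$ with $$\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} a_i \, \sigma\left(w_i \cdot x + b_i\right) \right| < \varepsilon,$$ where $\sigma$ is the nonlinearity.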
Signals in optical fiber can be filtered, meaning their amplitude is attenuated by an arbitrary percentage. They can also be split and merged, and nonlinear phenomena like the Kerr effect exist in practice (https://www.nature.com/articles/s41467-023-41377-5). My suspicion is that this is enough to approximate any function arbitrarily well, up to a scalar constant. The catch is that passive components can only attenuate, never amplify, so every weight must have magnitude at most one. I think this can be proven by taking the standard neural network construction and rescaling each layer by a common scalar factor small enough that no component ever has to grow in magnitude on its way to the output; the output is then the target function times a known scalar constant.
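Here's a minimal numerical sketch of that rescaling argument, assuming a positively homogeneous nonlinearity like ReLU, so the per-layer factors commute with the activation (the Kerr nonlinearity wouldn't have this exact property, and signed weights would need phase or differential encoding in a real fiber). The network below is a hypothetical stand-in for a trained one.

import numpy as np

rng = np.random.default_rng(0)

# A small random ReLU MLP standing in for a trained network.
dims = [4, 16, 16, 1]
Ws = [2.0 * rng.normal(size=(m, n)) for n, m in zip(dims, dims[1:])]
bs = [rng.normal(size=m) for m in dims[1:]]

def forward(x, Ws, bs):
    h = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = np.maximum(W @ h + b, 0.0)   # ReLU: positively homogeneous
    return Ws[-1] @ h + bs[-1]           # linear readout

# Rescale each layer so every weight entry has magnitude <= 1,
# i.e. it is implementable as an attenuation (up to sign).
cum = 1.0
Ws_att, bs_att = [], []
for W, b in zip(Ws, bs):
    c = 1.0 / max(1.0, float(np.abs(W).max()))  # per-layer factor
    cum *= c
    Ws_att.append(c * W)    # all entries now lie in [-1, 1]
    bs_att.append(cum * b)  # bias carries the cumulative factor

x = rng.normal(size=dims[0])
y_ref = forward(x, Ws, bs)
y_att = forward(x, Ws_att, bs_att)

# The attenuated network computes the original output times a
# constant (cum) that is known in advance.
assert np.allclose(y_att, cum * y_ref)
print(y_ref, y_att / cum)

Since ReLU satisfies $\sigma(cx) = c\,\sigma(x)$ for $c > 0$, the per-layer factors simply multiply through, and the attenuated network's output is the original output divided by a constant you know ahead of time.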
What you would get in practice is a neural network that evaluates at the speed of light through the fiber, with throughput limited essentially only by optical bandwidth and with almost no energy cost per inference. Particularly with CNNs and other image-ingestion pipelines, I can see this working very well.