I'm thinking of using FNOs for classification problems. For simplicity, just consider a binary classification problem where we need to predict class 0 or class 1. For this problem, the output of the network should be a real number. However, the output of a neural operator is a function, so we need some integral operator that maps the function to a real number, which we can then feed to a sigmoid, whose range is $(0,1)$.
I see two approaches:

1. Use an integral operator from the neuraloperator library, if one is already present. I was thinking of using the integral operator, but it produces a function as output, not a real number. It also seems to use graphs, which may be more complex than necessary because my inputs are structured images.
2. Write my own integral operator based on the LpLoss object. I was thinking of something like $I(u) = \int_\Omega u(x)\, \kappa_{\Phi}(x,y)\, u(y)\, dx\, dy$, which is analogous to the weighted norm $x^T K x$ for matrices. The operator $I(u)$ would map the output of the FNO, $u(x)$, to a real number; $\kappa_{\Phi}$ is a neural network whose parameters would be learned. However, the LpLoss object is a norm, so its output is always positive, whereas I need an output that can be both positive and negative (see the sketch after this list).
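For concreteness, a minimal sketch of how the bilinear functional in approach 2 could be discretized, assuming a scalar-valued $u$ sampled on a regular 1-D grid over $\Omega = [0,1]$. `BilinearFunctional`, its MLP kernel, and all names here are illustrative, not part of the neuraloperator API; for 2-D images the same idea applies after flattening the grid, though the cost grows with the square of the number of pixels.

```python
import torch
import torch.nn as nn

class BilinearFunctional(nn.Module):
    """I(u) ≈ h^2 * u^T K u, with K_ij = κ_Φ(x_i, x_j) from a small MLP."""

    def __init__(self, hidden=32):
        super().__init__()
        # κ_Φ : R^2 -> R, a learned kernel evaluated on coordinate pairs
        self.kappa = nn.Sequential(
            nn.Linear(2, hidden), nn.GELU(), nn.Linear(hidden, 1)
        )

    def forward(self, u, grid):
        # u: (batch, n) function values; grid: (n,) points in [0, 1]
        n = grid.shape[0]
        h = 1.0 / n  # cell width of the Riemann sum
        x = grid.view(n, 1).expand(n, n)  # x-coordinate of each (x, y) pair
        y = grid.view(1, n).expand(n, n)  # y-coordinate of each (x, y) pair
        k = self.kappa(torch.stack([x, y], dim=-1)).squeeze(-1)  # (n, n)
        # discrete analogue of the weighted norm x^T K x
        return torch.einsum("bi,ij,bj->b", u, k, u) * h * h
```

Since $K$ is not constrained to be positive definite, this output can take either sign, unlike a norm.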
Edit: Perhaps it is best to evaluate the integral $\int_\Omega u(x) g(x)\, dx$, where $g(x)$ is the output of an FNO layer applied to $u(x)$; that is, $g(x) = \int_\Omega k(x-y) u(y)\, dy + W u(x)$. This integral can be approximated the same way the $L_2$ norm is approximated; a sketch follows below.
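A minimal sketch of this edited idea, again assuming a scalar-valued $u$ on a regular 1-D grid over $[0,1]$. `SpectralConv1d` is a hypothetical stand-in for an FNO layer (the standard rfft/irfft construction), not the library's own class, and the final mean over grid points approximates the integral the same way LpLoss approximates the $L_2$ norm.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Hypothetical FNO-style layer: convolution k * u done in Fourier space."""

    def __init__(self, modes=16):
        super().__init__()
        self.modes = modes  # number of retained Fourier modes (<= n//2 + 1)
        self.weight = nn.Parameter(0.02 * torch.randn(modes, dtype=torch.cfloat))

    def forward(self, u):
        # u: (batch, n) real-valued samples of u(x)
        u_hat = torch.fft.rfft(u, dim=-1)
        out_hat = torch.zeros_like(u_hat)  # higher modes are truncated to zero
        out_hat[:, : self.modes] = u_hat[:, : self.modes] * self.weight
        return torch.fft.irfft(out_hat, n=u.shape[-1], dim=-1)

class IntegralHead(nn.Module):
    """Maps a discretized function u to one real-valued logit per sample."""

    def __init__(self, modes=16):
        super().__init__()
        self.spectral = SpectralConv1d(modes)
        self.w = nn.Parameter(torch.tensor(1.0))  # pointwise term W u(x)

    def forward(self, u):
        g = self.spectral(u) + self.w * u  # g(x) = ∫ k(x - y) u(y) dy + W u(x)
        # mean over grid points ≈ ∫_Ω u(x) g(x) dx on Ω = [0, 1]
        return (u * g).mean(dim=-1)

u = torch.randn(8, 128)        # batch of 8 functions on a 128-point grid
logits = IntegralHead()(u)     # one real number per sample, any sign
probs = torch.sigmoid(logits)  # ready for a binary cross-entropy loss
```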
Which approach seems most feasible to you?