Ask HN: Are Modular Neural Networks an interesting avenue for further research?

Modular/multiple neural networks (MNNs) revolve around training smaller, independent networks that can feed into each other or into a higher-level network: https://ift.tt/2O42qv7
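
To make the composition concrete, here is a minimal PyTorch sketch of what I mean: small subnetworks that could be trained independently, with a higher-level network consuming their outputs. The module names and layer sizes are placeholders I picked, not taken from the linked article.

    # Minimal sketch of a modular network: two small, independently trainable
    # subnetworks whose outputs feed a higher-level combiner network.
    # All names and dimensions here are illustrative only.
    import torch
    import torch.nn as nn

    class SubNet(nn.Module):
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.layers = nn.Sequential(
                nn.Linear(in_dim, 64), nn.ReLU(),
                nn.Linear(64, out_dim),
            )

        def forward(self, x):
            return self.layers(x)

    class HigherNet(nn.Module):
        """Consumes the concatenated outputs of the subnetworks."""
        def __init__(self, sub_out_dims, n_classes):
            super().__init__()
            self.head = nn.Linear(sum(sub_out_dims), n_classes)

        def forward(self, sub_outputs):
            return self.head(torch.cat(sub_outputs, dim=-1))

    # The subnets could be pre-trained separately (and frozen) before the
    # higher-level network is trained on their outputs.
    sub_a, sub_b = SubNet(10, 16), SubNet(20, 16)
    combiner = HigherNet([16, 16], n_classes=5)
    logits = combiner([sub_a(torch.randn(8, 10)), sub_b(torch.randn(8, 20))])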

In principle, this hierarchical organization could let us make sense of more complex problem spaces and achieve greater functionality, but concrete prior research on the topic seems hard to find. I've found a few sources:

https://ift.tt/2FStPAD

https://ift.tt/2P4f6SM

A few concrete questions I have:

Are there any tasks where MNNs have shown better performance than large single nets?

Could MNNs be used for multimodal classification, i.e. train each subnetwork on a fundamentally different type of data (text vs. images) and feed the outputs forward to a higher-level intermediary that operates on all of them? (I've included a rough sketch of this after the questions.)

From a software engineering perspective, aren't these more fault-tolerant and easier to isolate on a distributed system?

Has there been any work on dynamically adapting the topologies of the subnetworks using a process like Neural Architecture Search?

Generally, are MNNs practical in any way?
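
To make the multimodal question concrete, here is a rough PyTorch sketch of what I have in mind: one encoder per modality (possibly trained separately), with a small intermediary network on top of their concatenated outputs. The encoder architectures and all dimensions are placeholders of my own choosing, not from any of the linked sources.

    # One encoder per modality, plus an intermediary that operates on the
    # concatenated outputs of all encoders. Purely illustrative sizes.
    import torch
    import torch.nn as nn

    class TextEncoder(nn.Module):
        def __init__(self, vocab_size=10_000, embed_dim=32, out_dim=16):
            super().__init__()
            self.embed = nn.EmbeddingBag(vocab_size, embed_dim)  # bag-of-words style
            self.proj = nn.Linear(embed_dim, out_dim)

        def forward(self, token_ids):
            return self.proj(self.embed(token_ids))

    class ImageEncoder(nn.Module):
        def __init__(self, out_dim=16):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.proj = nn.Linear(8, out_dim)

        def forward(self, images):
            return self.proj(self.conv(images))

    class Intermediary(nn.Module):
        """Higher-level net operating on all modality encoders' outputs."""
        def __init__(self, in_dim=32, n_classes=4):
            super().__init__()
            self.head = nn.Linear(in_dim, n_classes)

        def forward(self, text_feat, image_feat):
            return self.head(torch.cat([text_feat, image_feat], dim=-1))

    text_enc, image_enc, top = TextEncoder(), ImageEncoder(), Intermediary()
    logits = top(text_enc(torch.randint(0, 10_000, (8, 20))),
                 image_enc(torch.randn(8, 3, 64, 64)))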

Apologies if these questions seem naive; I've only just come to ML, and CS more broadly, from a biology/neuroscience background and am captivated by the potential interplay.

I really appreciate you taking the time and lending your insight!


Comments URL: https://news.ycombinator.com/item?id=18586775

Points: 8

# Comments: 0


