"The particular issue is that a library only works when set up just right, with the correct input encoding, parameters, input data, output decoding, and so on. You realistically can't just share a whole library without also writing an entire program around it, one that assumes as a convention all the things the library needs to work."
"Okay, smartass," you might say, "so a single shader library might devise a consistent convention for all its shaders, but how do you pass the same data to different shaders without rearranging it? Huh?"
Back to the library analogy. Have you ever seen two 3D engines with mutually compatible data structures? No? So how do programmers solve it? They pick a single engine and stick to its conventions in their own code. When interfacing with other libraries, they convert their structures at the boundary.
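That boundary conversion is mundane code. A minimal sketch, assuming two invented vertex formats (neither is any real engine's API): one engine keeps positions and UVs in separate arrays, the other wants them interleaved in a single buffer.

```javascript
// Hypothetical conversion between two engines' vertex conventions.
// Engine A: separate arrays. Engine B: interleaved [x, y, z, u, v, ...].
function interleave(positions, uvs) {
  const vertexCount = positions.length / 3;
  const out = new Float32Array(vertexCount * 5);
  for (let i = 0; i < vertexCount; i++) {
    out.set(positions.slice(i * 3, i * 3 + 3), i * 5); // x, y, z
    out.set(uvs.slice(i * 2, i * 2 + 2), i * 5 + 3);   // u, v
  }
  return out;
}

const positions = new Float32Array([0, 0, 0, 1, 0, 0]);
const uvs = new Float32Array([0, 0, 1, 0]);
const interleaved = interleave(positions, uvs);
// interleaved is [0,0,0,0,0, 1,0,0,1,0]
```

Trivial, but note it allocates a new buffer and copies every vertex, which is exactly the cost discussed below.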
Library functions define typed parameters. Shaders also define typed parameters, and they don't even require ordered arguments, since uniforms and attributes are looked up by name. What is the difference, exactly?
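To make the comparison concrete, here is a hedged sketch in plain JS with no real GL calls: a library function consumes a positional argument list, while a shader effectively consumes a named bag of typed parameters that can be filled in any order.

```javascript
// Illustrative only; not a real GL API.

// Library style: order of arguments matters.
function shade(color, intensity) {
  return color.map((c) => c * intensity);
}

// Shader style: "uniforms" are set by name, in any order.
function makeShader() {
  const uniforms = {};
  return {
    setUniform(name, value) { uniforms[name] = value; },
    run() { return uniforms.color.map((c) => c * uniforms.intensity); },
  };
}

const s = makeShader();
s.setUniform("intensity", 0.5);     // the order of these two calls
s.setUniform("color", [1, 0.5, 0]); // doesn't matter
// shade([1, 0.5, 0], 0.5) and s.run() both yield [0.5, 0.25, 0]
```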
Rearranging the data requires a re-upload and possibly expensive processing. An engine or other code may also have different ideas about what state to set and how to arrange data, and you'll have to throw that code away or rewrite it (an especially easy proposition for, say, users of three.js) if you want it to work with a "shader library" that isn't explicitly taking care not to trample all over three.js's conventions. Of course, other libraries use other conventions, and unless you want their users to throw their library away, you'll have to write a facade, documentation, and a different shader set for each of those libraries too. So realistically, sharing a shader involves writing infrastructure code and shaders for N frameworks, engines, and libraries, carefully testing every single gorram one to make sure your "shader lib" works with them.
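What such per-engine facade code looks like can be sketched minimally. Assume a hypothetical shader lib that names its uniforms `u_modelView`, `u_projection`, `u_time` (invented names); a three.js-facing adapter has to rename them to the engine's conventions and wrap each value in the `{ value }` object that three.js `ShaderMaterial` uniforms use. Multiply this by N engines.

```javascript
// Hypothetical facade: map a shader lib's uniform names onto a
// three.js-style convention. The name table is invented for
// illustration; a real adapter would be written (and tested) per engine.
const THREE_STYLE_NAMES = {
  u_modelView:  "modelViewMatrix",
  u_projection: "projectionMatrix",
  u_time:       "time",
};

function adaptUniforms(libUniforms) {
  const adapted = {};
  for (const [libName, value] of Object.entries(libUniforms)) {
    const engineName = THREE_STYLE_NAMES[libName] ?? libName;
    adapted[engineName] = { value }; // three.js wraps uniforms as { value }
  }
  return adapted;
}

const adapted = adaptUniforms({ u_time: 1.5 });
// adapted.time.value === 1.5
```

And this only covers uniform naming; buffer layouts, render state, and attribute bindings each need the same treatment.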