I assume that when using RGBA with GL_UNSIGNED_BYTE, the byte ordering is identical whether on a big-endian or little-endian system:
RRRRRRRR
GGGGGGGG
BBBBBBBB
AAAAAAAA
Similarly with BGRA. Right?
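
In code, that means a pixel is just four consecutive bytes, so host endianness never enters into it. A minimal sketch of that assumption (put_rgba8 is a hypothetical helper of mine, not a GL call):

#include <stddef.h>
#include <stdint.h>

/* With GL_RGBA + GL_UNSIGNED_BYTE each component is a whole byte, so the
 * memory order is R, G, B, A regardless of host endianness. */
static void put_rgba8(uint8_t *pixels, size_t i,
                      uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    pixels[4 * i + 0] = r;  /* first byte in memory  */
    pixels[4 * i + 1] = g;
    pixels[4 * i + 2] = b;
    pixels[4 * i + 3] = a;  /* fourth byte in memory */
}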

The difficulty comes with the packed types, whose components are sub-byte,
e.g. GL_UNSIGNED_SHORT_4_4_4_4, GL_UNSIGNED_SHORT_5_5_5_1, GL_UNSIGNED_SHORT_5_6_5.
My understanding is that the components are packed into the native-endian short starting from its most significant bit, right?
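
If that reading is right, packing reduces to shifts down from the top of the value. A sketch under that assumption (pack_565 is my own helper; inputs are assumed already truncated to 5/6/5 bits):

#include <stdint.h>

/* GL_UNSIGNED_SHORT_5_6_5: first component (R) in bits 15..11,
 * G in bits 10..5, B in bits 4..0 of the native-endian short. */
static uint16_t pack_565(uint16_t r, uint16_t g, uint16_t b)
{
    return (uint16_t)((r << 11) | (g << 5) | b);  /* RRRRRGGGGGGBBBBB */
}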

Now, when a LITTLE_ENDIAN CPU is talking to a BIG_ENDIAN GPU, it seems that the in-memory layouts no longer match, e.g. for
{RGB, GL_UNSIGNED_SHORT_5_6_5, LITTLE_ENDIAN}
the 16-bit value (GGG = high three bits of green, ggg = low three)
RRRRRGGGgggBBBBB
appears in the byte order
gggBBBBB
RRRRRGGG
which looks like
gggBBBBBRRRRRGGG
on a big-endian machine. What is this layout called on a big-endian machine? My understanding is that _REV packs the components starting from the least significant bit, rather than the default most significant bit, right? So _REV does not fix this up, right?
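
To convince myself that _REV reverses component order within the value rather than swapping bytes, I sketched both side by side (helper names are mine; pack_565 is the earlier sketch):

#include <stdint.h>

/* GL_UNSIGNED_SHORT_5_6_5_REV: first component (R) in bits 4..0,
 * G in bits 10..5, B in bits 15..11. */
static uint16_t pack_565_rev(uint16_t r, uint16_t g, uint16_t b)
{
    return (uint16_t)((b << 11) | (g << 5) | r);  /* BBBBBGGGGGGRRRRR */
}

/* What crossing endianness actually does: swap the two bytes. */
static uint16_t swap16(uint16_t v)
{
    return (uint16_t)((v >> 8) | (v << 8));
}

/* swap16 of RRRRRGGGgggBBBBB yields gggBBBBBRRRRRGGG, which leaves green
 * split across the two halves; pack_565_rev keeps green contiguous in
 * bits 10..5, so _REV cannot undo a byte swap for 5_6_5. */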

It is easier when an integral number of components fits into each byte, e.g. (writing BARG and GRAB below for component orders that have no real GL enum):

{RGBA, GL_UNSIGNED_SHORT_4_4_4_4, BIG_ENDIAN} ==
{ABGR, GL_UNSIGNED_SHORT_4_4_4_4_REV, BIG_ENDIAN} ==
{BARG, GL_UNSIGNED_SHORT_4_4_4_4, LITTLE_ENDIAN} ==
{GRAB, GL_UNSIGNED_SHORT_4_4_4_4_REV, LITTLE_ENDIAN} ==
RRRRGGGG (first byte)
BBBBAAAA (second byte)

RRRRGGGGBBBBAAAA (read as a 16-bit value, big endian)
BBBBAAAARRRRGGGG (read as a 16-bit value, little endian)
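
A tiny throwaway test (hypothetical, not anything from the spec) that checks the first two lines of that equivalence and prints the bytes to expose the endian-dependent memory order:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t r = 0xF, g = 0xA, b = 0x5, a = 0x0;

    /* {RGBA, GL_UNSIGNED_SHORT_4_4_4_4}: R in bits 15..12 ... A in bits 3..0. */
    uint16_t rgba = (uint16_t)((r << 12) | (g << 8) | (b << 4) | a);

    /* {ABGR, GL_UNSIGNED_SHORT_4_4_4_4_REV}: A in bits 3..0 ... R in bits
     * 15..12, which constructs the very same 16-bit value. */
    uint16_t abgr_rev = (uint16_t)(a | (b << 4) | (g << 8) | (r << 12));

    printf("%04X %04X\n", rgba, abgr_rev);  /* FA50 FA50 on any machine */

    /* The bytes in memory are where endianness shows up. */
    const uint8_t *p = (const uint8_t *)&rgba;
    printf("byte0=%02X byte1=%02X\n", p[0], p[1]);
    /* big endian:    byte0=FA byte1=50  (RRRRGGGG, BBBBAAAA)
     * little endian: byte0=50 byte1=FA  (BBBBAAAA, RRRRGGGG) */
    return 0;
}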

Can you help me make some sense out of this?