docker run --rm ls | docker run --rm -i grep foobar > "$(docker run --rm mktemp)"
(assuming you have some well-defined ls, grep, and mktemp images to run as containers)
-i makes it interactive (keeps stdin open so grep can read from the pipe), and --rm removes the container when it exits; stdout and stderr are attached by default when you run in the foreground, so nothing extra is needed for the pipe to work. Without --rm, "docker run ls" would just do an ls inside the container, send the output to the container's log, and exit, but the stopped container would continue to exist in Docker's local storage, taking up space.
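For what it's worth, cleaning up after a forgotten --rm is easy; these are standard docker commands:

  docker ps -a                                 # lists all containers, including exited ones
  docker rm $(docker ps -q -f status=exited)   # removes every exited container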
It's kind of a useless example though, and not at all what Docker was intended for. It was designed for complete, self-contained application packages. So you could do "docker run --name my_webserver -p 80:80 httpd" and it'll download the httpd image (Apache) automatically, start it, and forward the container's port 80 to port 80 on the host. In the future you can simply go "docker start my_webserver" to bring it back up, or "docker stop my_webserver" to shut it down (rough sketch below).

Five seconds and I've got a web server up and running that contains all the binaries and libraries it needs: no need to apt-get anything, no library conflicts to sort out, and it doesn't even need to be the same flavour of Linux as the host, as long as the host runs kernel 3.8 or newer. Similarly with mysql and many other services: just do a properly configured "run" command and it handles everything.

I'm envisioning things like a Firefox container, or games in a Docker container, that sort of thing. If a game ever locks up it's trivial to "docker stop TheGame" and it'll kill everything in the container. Many other benefits.
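To make that lifecycle concrete, here's roughly how it plays out (the httpd and mysql images are the official ones; "my_webserver", "my_db", and the password are just names I picked, and -d runs the container in the background instead of attached to your terminal):

  docker run -d --name my_webserver -p 80:80 httpd   # pulls the image on first use, starts Apache
  docker stop my_webserver                           # shuts it down; the container sticks around
  docker start my_webserver                          # brings the same container back later

  # same idea for mysql; the official image takes its root password from an env var
  docker run -d --name my_db -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql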