If a picture's worth a thousand words, then what about a video?

In 2017, we launched video sharing to give our members the ability to share video content on the feed via the LinkedIn mobile app or a web browser. When posting a video from an Android device, a member could either record it with their device camera app or pick an existing video from the gallery. Once uploaded, the video would be transcoded into a consumption format and appear in the feed as an update.

Once the feature was successfully launched and started gaining popularity, we immediately set to work on performance improvements. Since video is such a “heavy” consumer of data, any performance gains would significantly improve the user experience.

We started with the assumption that members are most likely to share content straight from the mobile device they captured it on, which led us to look at typical capture parameters. At the time, out-of-the-box video recording resolution on Android cameras was about 720p to 1080p, with bitrates of 12 to 17 Mbps. This was very different from our top consumption format of 720p at 5 Mbps: we were essentially sending a lot of bytes to the backend just to be discarded by server-side transcoding.

The solution to this “throwaway data” problem was straightforward: transcode the video on the device to throw away those bytes before sending it over the network. To do that, we needed an on-device transcoder. We discovered an open source solution in android-transcoder, which performed basic hardware-accelerated video/audio transcoding on Android. However, we wanted to be able to modify video frames, which android-transcoder could not do. Furthermore, when we estimated the changes we would need to implement, we realized they would entail a major rewrite with an API break. We decided to write a library from scratch and collaborate with the android-transcoder project after completion. The popularity of android-transcoder and its forks (editor by selsamman, MP4Composer-android, Transcoder) demonstrated that there is a real need in the Android media community for video/audio transcoding and modification tooling.

This fall, I presented LiTr at the Demuxed 2019 conference, shortly after open sourcing it. In this post, I’ll provide a high-level overview of that talk, including how we built the LiTr architecture, how you can use it to transform your media, and why we chose MediaCodec to access the hardware encoder.

Transcoding on Android can be performed with software or hardware encoders. Software encoders (such as an Android port of ffmpeg) offer a wide variety of supported codecs and containers, as well as the ability to perform editing operations (joining/splitting videos, muxing/demuxing tracks, modifying frames, etc.). However, they can be very battery- and CPU-intensive. Hardware encoders have a limited codec selection, but are much more performant and power efficient.

After some experimentation, we concluded that a hardware encoder would be a much better fit for our needs and constraints. Our use case was fairly simple: reduce the video’s resolution and/or bitrate to avoid “throwing away” extra pixels. A hardware encoder would offer a real-time frame rate and lower battery consumption, both important considerations for the mobile device experience. Format compatibility-wise, we decided that the risk existed, but was low: members normally choose to share videos that are playable on their devices, meaning those videos can be decoded, and since most Android devices record video with H.264 compression, that codec would also be available to us for encoding.
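To put the capture-versus-consumption gap in numbers, here is a back-of-the-envelope estimate of the bytes saved by on-device transcoding. The 17 Mbps capture and 5 Mbps consumption bitrates come from the figures above; the 60-second clip duration and the class/method names are illustrative assumptions, not part of LiTr.

```java
// Rough upload-size estimate for a video track at a given bitrate.
// Audio and container overhead are ignored for simplicity.
public class TranscodeSavings {

    // bits per second * seconds / 8 bits-per-byte = bytes
    static long videoSizeBytes(long bitrateBps, long durationSec) {
        return bitrateBps * durationSec / 8;
    }

    public static void main(String[] args) {
        long captured = videoSizeBytes(17_000_000L, 60); // ~1080p capture: 127,500,000 bytes
        long target = videoSizeBytes(5_000_000L, 60);    // 720p consumption: 37,500,000 bytes
        System.out.printf("capture: %d MB, target: %d MB%n",
                captured / 1_000_000, target / 1_000_000);
    }
}
```

For a one-minute clip, transcoding before upload cuts the payload from roughly 127 MB to roughly 37 MB, i.e., about 70% of the uploaded bytes would otherwise have been discarded by server-side transcoding.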