FPGAs are a means of implementing hardware, and the main implementation methodology is RTL-based. […] However, there is a sea change going on in how systems are designed. They are written in software programming languages, but instead of simply being compiled into code for a processor, they can be processed into a hybrid system: much of it implemented in software, with the most performance- or power-critical portions implemented in the programmable fabric.
For example, a high-end Zynq has a quad-core ARM Cortex-A53 processor, a dual-core ARM Cortex-R5 processor, an ARM Mali-400MP GPU, and an H.265 video codec. This is more like a blade server on a chip than a traditional FPGA. To accelerate a particular procedure (such as a video or DSP algorithm), it is only necessary to mark it for speedup. Under the hood, the C code will be synthesized into the FPGA fabric and all the plumbing constructed to link it back to the calling C code. It will run just like the C code on the ARM processor, only faster (and probably with lower power).
One of the key technologies for doing this is high-level synthesis, which allows engineers with negligible hardware design experience to implement high-performance systems. […] This is not purely a Xilinx trend: at DAC last year I attended a presentation by engineers from Google, experts in video standards, on how they had implemented their own proprietary algorithms with high-level synthesis and no hardware knowledge.
The reason these new software-focused environments are so important is that the experts in the algorithms required for a successful system all work at a very high level and are not (usually) RTL-literate. Besides, taking an algorithm for something like vision processing and implementing it in RTL is just too slow, and each improvement to the algorithm results in too much rework. These algorithms are not stable; they are areas where companies attempt to differentiate themselves and continuously improve their solutions.