Industry Program Chair: Fa-Long Luo, Micron Technology Inc., USA
The commercial launch of 5G in 2019 ushered in a new era for wireless technology, with the promise of higher throughput, lower latency, and diverse use cases. Today, more than 100 operators worldwide have either commenced or completed the initial phase of 5G deployment, despite the ongoing challenges of the public health crisis. While the wireless research community enjoys this moment of collective accomplishment, the natural question to ask ourselves is “What’s next?”
With the recent history and experience of 5G as our guide, we will share our perspective and shed some light on the state of the art in 5G as well as an initial 6G vision of bringing the next hyper-connected experience to every corner of life. We intend to provide a holistic view from an industry perspective that includes the megatrends driving technology evolution towards 6G, the new services envisioned and enabled, and the technical requirements to realize these new services. The expected technical requirements on throughput, architecture, and security will likely be a major step up from those of 5G, and it is therefore critical for the research community to start early and develop technologies to overcome these challenges.
While 6G technology is still in its early days, a few emerging directions are taking shape and gaining momentum in academia and industry alike, including support for new spectrum such as the Terahertz (THz) band, novel antenna technologies, the evolution of duplex technology and network topology, spectrum sharing, and AI as a native part of the protocol design.
In addition to the conventional role of RF circuits in communication and information exchange, another promising direction is the use of RF signals for privacy-preserving secure sensing and ranging. Ultra-Wideband (UWB) technology is a good example where the unique combination of precision and protection is likely to become the basis for secure transactions of all kinds. UWB shows promise for the evolution of secure transactions in mobile devices, by driving the convergence of hands-free access, location-based services, payments, identification, and device-to-device interactions. The FiRa™ Consortium is actively leading the rapid industry adoption of UWB technology through ecosystem building and development of technical specifications and certification programs. This trend of convergence between RF-based communications and sensing will likely accelerate as we explore THz bands, where dense arrays can greatly improve the spatial resolution of these RF sensing solutions and narrow the gap with other modalities.
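As a concrete illustration of the precision-ranging side of UWB, the sketch below works through single-sided two-way ranging, one common way UWB radios turn timestamps into distance. The function and all timing values are hypothetical examples for illustration, not any FiRa-specified procedure.

```python
# A minimal sketch of single-sided two-way ranging (SS-TWR), the kind of
# timestamp exchange UWB radios use to estimate distance. All timing
# values below are hypothetical; real devices additionally correct for
# clock drift and antenna delay.

C = 299_792_458.0  # speed of light, m/s

def ss_twr_distance(t_round_s: float, t_reply_s: float) -> float:
    """Distance from the initiator's measured round-trip time and the
    responder's known reply delay: time of flight is half the difference."""
    time_of_flight = (t_round_s - t_reply_s) / 2.0
    return C * time_of_flight

# Example with made-up numbers: a 500 us reply delay plus the ~33 ns
# round-trip flight time of a device about 5 m away.
t_reply = 500e-6
t_round = t_reply + 2 * (5.0 / C)
print(f"estimated distance: {ss_twr_distance(t_round, t_reply):.2f} m")
```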
Charlie (Jianzhong) Zhang is SVP and head of the Standards and Mobility Innovation Team at Samsung Research America, where he leads research, prototyping, and standards for 5G and future multimedia networks. He also currently serves as Chairman of the Board of the FiRa Consortium and as a member of the FCC Technological Advisory Council. From 2009 to 2013, he served as Vice Chairman of the 3GPP RAN1 working group and led the development of LTE and LTE-Advanced technologies such as 3D channel modeling, UL-MIMO, CoMP, and carrier aggregation for TD-LTE. He received his Ph.D. degree from the University of Wisconsin-Madison. Dr. Zhang is a Fellow of the IEEE.
With over 466 million people suffering from disabling hearing loss globally according to the World Health Organization, a number expected to rise to 900 million by 2050, hearing aids are crucially important medical wearable devices. Untreated hearing loss has been linked to increased risks of social isolation, depression, dementia, fall injuries, and other health issues. However, partly due to a historical stigma associated with assistive devices and their single-function nature, only a small fraction of the people who need help with hearing have actually adopted them.
In this talk, we will present a new class of multifunctional in-ear devices with embedded sensors and artificial intelligence. In addition to providing frequency-dependent amplification of sound to compensate for the wearer’s hearing loss, these devices continuously classify sound and enhance speech, serve as a continuous monitor of physical and cognitive health, act as an automatic fall detection and alert system, and function as a personal assistant with connectivity to the cloud. Furthermore, these ergonomically designed devices stream phone calls and music with an astounding all-day battery life, translate languages, transcribe speech, and remind the wearer of medication and other tasks.
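As a rough sketch of what frequency-dependent amplification means in practice, the example below boosts each frequency band by a gain taken from a hypothetical prescription. The band edges, gains, and sample rate are invented for illustration; real hearing aids use low-latency filter banks with wide-dynamic-range compression rather than this offline FFT approach.

```python
# Illustrative frequency-dependent amplification: split the signal into
# bands with an FFT and apply a per-band gain from a hypothetical
# prescription (more gain at high frequencies, a common loss pattern).
import numpy as np

FS = 16_000  # assumed sample rate, Hz

# Hypothetical prescription: band upper edges (Hz) and gains (dB).
BAND_EDGES_HZ = [500, 1_000, 2_000, 4_000, 8_000]
GAINS_DB      = [  0,     5,    10,    20,    25]

def amplify(x: np.ndarray) -> np.ndarray:
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    gain = np.ones_like(spec, dtype=float)
    lo = 0.0
    for hi, g_db in zip(BAND_EDGES_HZ, GAINS_DB):
        band = (freqs >= lo) & (freqs < hi)
        gain[band] = 10 ** (g_db / 20.0)  # dB to linear amplitude
        lo = hi
    return np.fft.irfft(spec * gain, n=len(x))

# A quiet 3 kHz tone falls in the 2-4 kHz band, so it gains 20 dB (10x).
t = np.arange(FS) / FS
tone = 0.01 * np.sin(2 * np.pi * 3_000 * t)
out = amplify(tone)
print(f"input RMS {tone.std():.4f} -> output RMS {out.std():.4f}")
```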
Rapid progress in sensors and artificial intelligence is bringing an amazing array of new devices, applications, and user benefits to the world. Now, these advanced technologies are transforming traditional hearing aids into multipurpose devices, helping people not only hear better, but live better lives in many more ways.
Dr. Achin Bhowmik is the Chief Technology Officer and Executive Vice President of Engineering at Starkey, a privately held medical devices company with over 5,000 employees and operations in over 100 countries. In this role, he is responsible for the company’s technology strategy, global research, and product development and engineering departments, and he leads the drive to transform hearing aids into multifunctional health and communication devices with advanced sensors and artificial intelligence technologies.
Prior to joining Starkey, Dr. Bhowmik was vice president and general manager of the Perceptual Computing Group at Intel Corporation, where he was responsible for the R&D, operations, and businesses in the areas of 3D sensing and interactive computing, computer vision and artificial intelligence, autonomous robots and drones, and immersive virtual and merged reality devices.
Dr. Bhowmik is on the faculty of Stanford University, where he holds an adjunct professor position at the Stanford School of Medicine, advises research, and lectures in the areas of cognitive neuroscience, sensory augmentation, computational perception, and intelligent systems. He has held adjunct and guest professor positions at the University of California, Berkeley; the Liquid Crystal Institute at Kent State University; Kyung Hee University, Seoul; and the Indian Institute of Technology, Gandhinagar.
He serves on the board of trustees for the National Captioning Institute, board of directors for OpenCV, industry advisory board for Biomedical Engineering at the University of Minnesota, and board of advisors for the Fung Institute for Engineering Leadership at the University of California, Berkeley. He is also on the board of directors and advisors for several technology startup companies.
His awards and honors include Fellow of the Institute of Electrical and Electronics Engineers (IEEE), Fellow and President-Elect of the Society for Information Display (SID), the Industrial Distinguished Leader Award from the Asia-Pacific Signal and Information Processing Association, the IEEE Distinguished Industry Speaker Award, TIME’s Best Inventions, the Red Dot Design Award, and the Artificial Intelligence Breakthrough Award. He has authored over 200 publications, including two books, and holds 39 issued patents.
Dr. Bhowmik and his work have been covered in numerous press articles, including in TIME, Fortune, Wired, USA Today, U.S. News & World Report, The Wall Street Journal, CBS News, Forbes, Bloomberg Businessweek, Scientific American, Popular Mechanics, and MIT Technology Review.
Quantum computing has captured the imagination of scientists and enthusiasts alike. Investment in this area has skyrocketed, with several companies and start-ups now offering, or promising to offer, custom access to quantum devices. In particular, IBM has offered an extended portfolio of devices to the broad community for research purposes.
This talk will first review the basics of quantum computing and present an overview of potential applications. It will then describe how a quantum computer is implemented at IBM, including how control signals are applied to the quantum computer, along with potential topics of interest to the signal processing community. Looking into the future, quantum error correction (QEC) will likely be necessary for full-scale quantum computing. The talk will review QEC and some of its associated overhead, and highlight progress in this field.
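To give a feel for the overhead QEC entails, here is a toy sketch of the classical skeleton of the three-qubit bit-flip code: one logical bit is stored in three physical bits and decoded by majority vote, tripling the resource cost while suppressing errors only when the physical error probability p is below 1/2. The error model and numbers are illustrative only; practical codes such as the surface code carry far larger overheads.

```python
# Toy model of error-correction overhead: the 3-bit repetition code
# (the classical skeleton of the quantum bit-flip code). One logical
# bit costs three physical bits; majority voting fails only when two
# or more bits flip, giving a logical error rate of 3p^2 - 2p^3.
import random

def logical_error_rate(p: float, trials: int = 100_000) -> float:
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(3))
        if flips >= 2:  # majority vote decodes to the wrong value
            failures += 1
    return failures / trials

for p in (0.01, 0.1, 0.3):
    print(f"p={p}: simulated {logical_error_rate(p):.4f}, "
          f"analytic {3*p**2 - 2*p**3:.4f}")
```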
Eventually constructing a quantum computer comprising potentially millions of qubits will likely require multiple scientific disciplines to work together on its co-design. This opens up exciting prospects for new collaborative avenues.
Matthias Steffen received the B.S. degree in physics from Emory University, Atlanta, GA, USA, in 1998, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, USA, in 2000 and 2003, respectively.
He has been working in the field of quantum computing since 1998 and focused on a variety of approaches toward building a quantum computer. His Ph.D. thesis focused on testing small prototype quantum computers using nuclear spins in a liquid solution, while his postdoctoral work and work at IBM centered on advancing superconducting quantum bits. He joined IBM Watson, Yorktown Heights, NY, USA, in 2006 and managed the Experimental Quantum Computing Group from 2010 to 2014, after which he became the Chief Quantum Architect.
He has authored or coauthored over 40 peer-reviewed articles in the field of quantum computing. Dr. Steffen was named an APS Fellow in 2013 for his contributions to the quantum computing field. In 2017, he was named an IBM Fellow for his work in quantum computing.
Video conferencing was invented several decades ago, and fortunately technology advances have improved its quality, economics, and accessibility, making it widely available during the pandemic. While current video conferencing is immensely valuable, it is still not a replacement for in-person collaboration. Challenges include unnaturalness due to the lack of eye contact and gaze awareness, and the inability to collaborate seamlessly on a shared surface, as typically occurs in person on a whiteboard or sheet of paper.

Further along the spectrum of collaboration is Augmented Reality (AR), which in principle can be incorporated into many everyday activities, with the potential to augment human intelligence and productivity. AR still has many challenges; in addition to user-experience issues, there are the cost, weight, power, and capability limitations of AR headsets. Furthermore, most current AR applications are largely single-user, yet the ability to collaborate naturally across multiple users would benefit many valuable use cases.

This talk will discuss some of the key challenges to be overcome in these areas, and how industry advances in ICASSP’s core areas and adjacent areas such as sensors, wireless networking, and edge and cloud computing can help overcome these challenges, enable improved capabilities, and change the economics and accessibility of these technologies to make them as practical and widely used as video conferencing is today.
John Apostolopoulos was VP & CTO of Cisco’s largest business (about $22B/year), covering enterprise networking, data center networking, and the Internet of Things. As CTO, John drove the technology and architecture direction in strategic areas for the business. He also founded Cisco’s Innovation Labs to create innovations that unlock opportunities in strategic areas for the future. His coverage included intent-based networking; wireless (e.g., Wi-Fi 6/7, private 5G, OpenRoaming, RF sensing); SD-WAN and cloud networking; edge computing; application-aware networking; networked interactive video/AR/VR; indoor location-based services; smart cities; connected vehicles; co-design of networking and security; and machine learning, deep learning, machine reasoning, and other forms of AI applied to these areas. Previously, John was Lab Director for the Mobile & Immersive Experience (MIX) Lab at HP Labs, which conducted research on novel mobile devices and sensing, mobile client/cloud multimedia computing, immersive environments, video & audio signal processing, computer vision & graphics, multimedia networking, glasses-free 3D, wireless, and user-experience design.

John is an IEEE Fellow and an IEEE SPS Distinguished Lecturer. He was named “one of the world’s top 100 young innovators” by MIT Technology Review, contributed to the US Digital TV Standard (Engineering Emmy Award), and his work on media transcoding in the middle of a network while preserving end-to-end security (secure transcoding) was adopted in the JPSEC standard. He has published over 100 papers, received 5 best-paper awards, and holds about 100 granted US patents. John was a Consulting Associate Professor of EE at Stanford. He received his B.S., M.S., and Ph.D. from MIT.
John served as chair of the IEEE Image, Video, & Multidimensional Signal Processing TC (2008-2009) and as technical co-chair for IEEE ICIP’07, MMSP’11, ESPA’12, and Packet Video’13. He also served on the IEEE SPS Board of Governors.
The Versatile Video Coding (VVC) standard was finalized in 2020. It provides superior compression capability, often reaching about 50% bit-rate reduction compared to its predecessor, High Efficiency Video Coding (HEVC). VVC was designed not just to obtain the best possible compression performance but also to be easily adaptable to the needs of current and emerging video services, including low-delay applications, adaptive bit-rate (ABR) streaming, and immersive video.
Applications where end-to-end delay matters include, for example, video conferencing and the remote operation of machines. VVC achieves significantly lower end-to-end latency than earlier codecs thanks to its gradual decoding refresh feature, which avoids the need to transmit intra-coded pictures and thus enables matching the encoded video bit-rate to the network throughput under all circumstances.
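The back-of-the-envelope sketch below, using entirely made-up picture sizes and refresh period, illustrates why spreading the intra-refresh cost over several pictures flattens the bit-rate peaks that a full intra picture would otherwise create, and it is those peaks that force extra buffering delay.

```python
# Hypothetical picture sizes illustrating the bit-rate peak problem.
INTER_BITS = 20_000            # assumed size of an inter-coded picture
INTRA_BITS = 10 * INTER_BITS   # assumed size of a full intra picture
N = 8                          # assumed refresh period, in pictures

# Conventional refresh: one large intra picture, then inter pictures.
conventional = [INTRA_BITS] + [INTER_BITS] * (N - 1)

# Gradual decoding refresh: each picture refreshes about 1/N of the
# frame, carrying roughly 1/N of the intra cost on top of its inter data.
gdr = [INTER_BITS + INTRA_BITS // N] * N

print("peak picture size, conventional:", max(conventional))  # 200000 bits
print("peak picture size, GDR:         ", max(gdr))           # 45000 bits
```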
For ABR streaming over the Internet, the same video content is encoded at multiple spatial resolutions. The reference picture resampling feature of VVC enables resolution switching in ABR streaming with enhanced compression efficiency, on top of the already superior compression of VVC.
Immersive video services, such as the streaming of 360° video, typically require a very high bit-rate. The transmitted bit-rate can be reduced when only the portion that the user is viewing, i.e., the viewport, is delivered at high quality. The subpicture feature of VVC makes viewport-adaptive streaming more straightforward and compression-efficient than with earlier codecs.
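To see why viewport-adaptive delivery helps, here is some rough arithmetic with invented tile counts and per-tile rates: fetching only the viewport subpictures at high quality and the rest at low quality cuts the transmitted rate to a fraction of uniform high-quality delivery.

```python
# All tile counts and per-tile rates below are invented for illustration.
TILES = 24            # subpictures covering the full 360-degree sphere
VIEWPORT_TILES = 6    # subpictures visible in a typical viewport
HI_KBPS = 1_000       # assumed per-tile rate at high quality
LO_KBPS = 200         # assumed per-tile rate at low quality

uniform = TILES * HI_KBPS
adaptive = VIEWPORT_TILES * HI_KBPS + (TILES - VIEWPORT_TILES) * LO_KBPS
print(f"uniform high quality: {uniform} kbps")   # 24000 kbps
print(f"viewport-adaptive:    {adaptive} kbps")  # 9600 kbps
```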
This talk explains what makes VVC versatile across different purposes and reviews the usage and benefits of VVC for a range of applications and video content types, including those outlined above.
Dr. Miska M. Hannuksela is a Bell Labs Fellow and Head of Video Research at Nokia Technologies. He has been with Nokia, Tampere, Finland, since 1996 in different roles, including research manager/leader positions in the areas of video and image compression, end-to-end multimedia systems, and sensor signal processing and context extraction. Dr. Hannuksela has published more than 180 journal and conference papers. He has been an active delegate in various video and multimedia standardization groups since 1999, has co-authored more than 1,000 standardization contributions, and has served as an editor of many standards, including, for example, the standard amendment for encapsulating VVC in MP4 files. He was an Associate Editor of the IEEE Transactions on Circuits and Systems for Video Technology from 2010 to 2015.
This talk will present the relevant historical events in the evolution of blockchain and Distributed Ledger Technology (DLT). In 2009, Bitcoin was launched as a peer-to-peer electronic cash platform. In 2015, programmable blockchain projects such as Ethereum, Hyperledger Fabric, and R3 Corda were launched. Over the last decade, blockchain/DLT has evolved to encompass a collection of distributed computer network architectures implementing various decentralized consensus protocols. Blockchains/DLTs can be categorized as public, private, or hybrid. On public blockchains, various projects are under development to support the decentralized/token economy. This talk will highlight various innovative experimental efforts on public blockchains, such as the Decentralized Internet (Web 3.0), Initial Coin Offerings (ICOs), Initial Exchange Offerings (IEOs), Security Token Offerings (STOs), Decentralized Finance (DeFi), and Non-Fungible Tokens (NFTs). Enterprise and government use cases are primarily built on private blockchains. This talk will also cover the enterprise blockchain landscape, including various alliances, consortia, and Blockchain-as-a-Service (BaaS) platforms. Enterprise applications span a broad range of areas, including finance, supply chain, the Internet of Things (IoT), energy, cybersecurity, healthcare, pharma, transportation, insurance, and more. At a high level, this presentation will provide an overview of the blockchain ecosystem and the current landscape.
Ramesh Ramadoss is a co-chair of the IEEE Blockchain Initiative and leads the IEEE Blockchain Standards Committee. He also serves on the Expert Panel of the European Union Blockchain Observatory. He received his Ph.D. in Electrical Engineering from the University of Colorado at Boulder, USA, in 2003. From 2003 to 2007, he was an Assistant Professor in the Department of Electrical and Computer Engineering at Auburn University, Alabama, USA. In 2008, he moved to Silicon Valley and has since gained over a decade of industrial experience at technology companies. He has conducted R&D projects for DARPA, NASA, the US Army, the US Air Force, Sandia National Labs, and Motorola Labs. He is the author or co-author of 1 book, 3 book chapters, and 55 research papers (Google Scholar citations: 936). He has delivered talks at over 90 international conferences in 40 countries.