Brooklyn 5G: Highlights from Day 1

Article By : Martin Rowe

Day 1 of the sixth annual Brooklyn 5G Summit featured keynotes and panels covering where 5G stands and where it might go

Brooklyn, N.Y. — The sixth annual Brooklyn 5G Summit opened yesterday at the NYU Tandon School of Engineering. Sponsored by Nokia and organized by NYU Wireless, Wednesday’s session of keynotes and panel discussions covered topics such as industries that can benefit from 5G, an update on 3GPP standards and where they might go, and how machine-learning (ML) algorithms are becoming tools for wireless engineers.

Following welcoming remarks from NYU Wireless founder Prof. Ted Rappaport and Nokia Bell Labs CTO and president Marcus Weldon, Nokia president and CEO Rajeev Suri spoke by video from Finland, declaring, “5G has been released to the wild.” While some politicians may describe the race to deploy 5G as confrontational, Suri said that isn’t so because 5G is a worldwide effort from which people in many countries will benefit. “Everyone can win.”

“We think 5G will bring into the digital age industries such as utilities that were left behind by 2G, 3G, and 4G. They didn’t adopt LTE, and 5G’s most exciting applications won’t be in the consumer space.” Suri sees smart sensors, edge computing, network slicing, and 5G New Radio (5G NR) making for end-to-end 5G connectivity. He also sees 5G creating wealth (and hopefully jobs), saving lives through improvements in medicine and public safety, and bringing telco-class networks to enterprises. But security will become even more of an issue, he told the audience. He closed by declaring 2019 as “Year Zero of the 6G research era.”

AT&T CTO Andre Fuetsch then took the stage to discuss how AT&T is deploying 5G. “Our robust consumer and enterprise business is growing,” he said, adding that AT&T’s network passes 253 Tbytes of data every day, with 57% being video, and that portion is growing. Given AT&T’s acquisition of Time Warner, Fuetsch expects further gains in video distribution through the wired and wireless networks.

Fuetsch noted how AT&T is deploying 5G in 19 cities, the first being Waco, Texas. “We have mmWave working in Waco,” he said, while showing a slide indicating a download speed of 1.5 Gbits/s.

AT&T CTO Andre Fuetsch showed a 5G mobile router and its 1.5-Gbit/s download speed. Photo by Martin Rowe.

Fuetsch said that AT&T is learning as it goes, as are the other carriers. “We need better modeling tools for mmWave signals.” As we saw later in the day, researchers are using ML to characterize those channels.

Following Fuetsch’s talk, Ericsson’s Mikael Hook and Qualcomm’s John Smee described the state of 5G standards and where future releases might go.

“3GPP Release 16 is coming,” said Hook. “Release 17 is in the planning stage and the Release 18 roadmap is open. Release 19 should come in 2022 or 2023.” He then discussed some of the possible features in these releases. The slide below gives an overview:

Ericsson’s Mikael Hook provided a glimpse into how 3GPP Releases 16, 17, and 18 might look. Photo by Martin Rowe.

“Release 16 will expand 5G,” Smee told the conference crowd. “Releases 17 and 18 will feature frequencies at 60 GHz and higher.” He also foresees a denser wireless network, which will be needed to support mmWave signals, and he expects edge cloud computing to allow for simple, inexpensive user devices, although some devices will need on-device data processing. Virtual reality and augmented reality will become commonplace.

5G is expected to bring enhanced services such as AR and VR in future releases through frequencies above 60 GHz. Photo by Martin Rowe.

Hook and Smee were joined by AT&T’s Arun Ghosh, T-Mobile’s Karri Kuoppamaki, OPPO’s Hai Tang, Nokia’s Harish Viswanathan, and Huawei’s Peiying Zhu for a panel moderated by Prof. Robert Heath of UT Austin. “What’s wrong with 3GPP Release 15?” asked Heath.

“Release 15 is a starting point for 5G,” answered Ghosh. “We won’t go from LTE directly to full 5G standalone. The transition will occur in phases.” Kuoppamaki added, “We should write the standards based on business needs.”

Brooklyn 5G standards panel (l–r) Hai Tang, Mikael Hook, Harish Viswanathan, Arun Ghosh, Peiying Zhu, John Smee, Karri Kuoppamaki, and moderator Dr. Robert Heath. Photo by Martin Rowe.

Heath asked, “What should we get in Release 16?” Zhu replied by saying that we need LTE enhancement because the transition will start with non-standalone deployments in which LTE will operate alongside 5G. “Release 16 should address IIoT and lower-power IoT devices. Release 17 should address high-density networks for mmWave.” Smee added, “Lower-latency and high-density mobility can’t be compromised. It will be consumer versus industrial needs in future releases.”

Heath then asked the panelists what the value of AI and ML is to 3GPP. “ML is a tool,” started Smee. “It will be used for link adaptation.” Zhu concurred with Smee that ML and AI can be used to optimize a link channel. “We don’t need a standard for ML,” she said. Ghosh disagreed, arguing that ML tools will need a lot of data and, thus, there should be a standard for extracting data from the network. Ghosh also noted that ML might replace communications theory. That was the topic of two afternoon talks and a panel session, covered below.
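Smee’s link-adaptation example can be pictured with a toy sketch: a learned lookup that picks a modulation-and-coding index from the SNR it observes. Everything here — the function names, the training table, the rate indices — is invented for illustration and is not from any panelist’s system.

```python
# Hypothetical sketch: ML-style link adaptation as a nearest-neighbor lookup
# learned from (SNR, best-rate) observations. All names and data are
# illustrative, not from any carrier's implementation.

def train_link_adapter(observations):
    """observations: list of (snr_db, best_rate_index) pairs."""
    return sorted(observations)  # keep the samples as a sorted table

def pick_rate(table, snr_db):
    """Choose the rate whose training SNR is closest to the observed SNR."""
    return min(table, key=lambda obs: abs(obs[0] - snr_db))[1]

# Toy training data: higher SNR supports a higher-order modulation index.
table = train_link_adapter([(0, 0), (5, 0), (10, 1), (15, 2), (20, 3)])
print(pick_rate(table, 12))   # nearest training SNR is 10 dB -> rate index 1
```

A real adapter would learn from live link statistics rather than a fixed table, but the principle is the same: the mapping from channel conditions to transmission rate is learned from data instead of derived analytically.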

The nature of machine learning

Machine learning is an engineering tool

In the afternoon, Stanford University professor Andrea Goldsmith showed how ML can indeed be used to model a transmission channel. By processing the signal before and after transmission through a channel, an ML algorithm can learn how the channel responds, and the result can be used in receiver design. “We don’t have good models for transmission channels, and we found that ML proved better than communications theory at characterizing it.”

Goldsmith noted that researchers at Stanford couldn’t find an existing ML algorithm for characterizing the channel, so they created their own. It processes the incoming signal with a sliding bidirectional recurrent neural network (SBRNN), which estimates each symbol from overlapping windows of samples and combines the results. Think of it as signal averaging.
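As a rough illustration of that sliding-window-plus-averaging idea — not the Stanford team’s actual SBRNN, which runs a trained neural network over each window — consider this sketch, where a placeholder estimator scans overlapping windows and each sample’s final decision averages the votes it received:

```python
# Hedged sketch of the "sliding window + averaging" idea behind an SBRNN-style
# detector. The per-window estimator here is a trivial threshold, standing in
# for the neural network; only the overlap-and-average structure is the point.

def window_estimate(window):
    # Placeholder for the per-window neural estimator: hard-threshold at 0.5.
    return [1 if x > 0.5 else 0 for x in window]

def sliding_detect(rx, win=3):
    n = len(rx)
    votes = [[] for _ in range(n)]
    for start in range(n - win + 1):
        est = window_estimate(rx[start:start + win])
        for k, bit in enumerate(est):
            votes[start + k].append(bit)   # overlapping windows each vote
    # Average the votes at each position -- the "signal averaging" analogy.
    return [round(sum(v) / len(v)) for v in votes]

noisy = [0.9, 0.1, 0.8, 0.2, 0.7]
print(sliding_detect(noisy))  # -> [1, 0, 1, 0, 1]
```

Because every sample is seen by several windows, an error in any single window’s estimate can be outvoted by its neighbors, which is what makes the approach robust when no good channel model exists.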

Researchers at Stanford developed a sliding bidirectional recurrent neural network (SBRNN) algorithm that uses machine learning to characterize a 5G transmission channel. Photo by Martin Rowe.

“The algorithm is useful when we don’t have good channel models,” said Goldsmith. “If you’ve already developed solid channel models through other methods, then you don’t need ML.”

Prof. Tim O’Shea of Virginia Tech followed Goldsmith and also explained how ML can be used to characterize a physical layer. “Using machine learning, we can look at a transmission channel as a single system and model it. With such models, you can use inexpensive hardware and do more of the signal processing in software.” O’Shea added that engineers can use ML tools not only to model channels but also to minimize power consumption.
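The data-driven channel-modeling idea that O’Shea and Goldsmith described can be sketched in miniature: instead of assuming a closed-form channel model, fit the channel’s response directly from transmitted/received sample pairs. This toy uses plain least squares rather than a neural network, and the two-tap channel is invented for illustration — it is not from either talk.

```python
import numpy as np

# Hedged illustration of learning a channel response from data rather than
# from a closed-form model: fit a 2-tap FIR channel by least squares using
# transmitted/received sample pairs. The taps and setup are invented.

rng = np.random.default_rng(0)
true_taps = np.array([0.8, 0.3])          # "unknown" channel to recover
tx = rng.standard_normal(200)
rx = np.convolve(tx, true_taps)[:200]     # channel output (noise-free toy)

# Build the regression matrix: each row holds [tx[n], tx[n-1]].
X = np.column_stack([tx, np.concatenate(([0.0], tx[:-1]))])
est_taps, *_ = np.linalg.lstsq(X, rx, rcond=None)
print(np.round(est_taps, 3))   # recovers approximately [0.8, 0.3]
```

A neural-network version replaces the linear fit with a learned nonlinear mapping, which is what lets the channel, amplifier distortion, and other hardware impairments be treated as one system and compensated in software.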

Machine learning can use data to learn how a PHY can be modeled in software, reducing hardware costs. Photo by Martin Rowe.

Goldsmith and O’Shea then joined a panel that included Siddharth Garg of NYU Wireless, Erik Stauffer of Google, and Ali Yazdan of Facebook to discuss ML. Although Goldsmith noted that ML did a better job at modeling a wireless channel than was possible through communications theory, she didn’t see ML replacing communications engineers. As earlier panelists noted, ML is a tool that engineers can use, but working designs still need an engineer’s experience and intuition. “Hardware engineering won’t become a discipline of software engineering,” said Goldsmith. O’Shea added, “ML is really about optimization. You still need to know radio communication systems, but older modeling techniques may become less used.”

“ML is still brittle,” noted Goldsmith. “It will improve, just as control-system models improved after being brittle at first.”

Stauffer and Yazdan looked at ML from their own perspectives, discussing how their companies use AI and ML to analyze user data. They noted its failures, particularly in face recognition. That led to a discussion about the data used to train ML models: if the data is biased, the algorithm learns that bias. Garbage in, garbage out.
