89 - Cislunar, Proliferated LEO and Accelerating Innovation
24 min

Listen to Colonel Eric Felt talk about what the Air Force Research Lab (AFRL) is doing to maintain the U.S. competitive edge in space. Building a strong team is important, and he believes that is part of the secret sauce that makes AFRL so successful. Learn about cislunar space and XGEO and why these are exciting areas to study. Col Felt reflects that the growing interest in cislunar and XGEO comes down to commercial activity, resources, and political advantages. He said there are cost and performance advantages coming from proliferated LEO, and that every mission in the future that can be done from LEO will be done from LEO. With so many thousands of satellites going up, a high level of automation will be essential for their management; however, there will be limits to this kind of autonomy, especially in a warfighting scenario.

IoT For All Podcast
IoT For All
PKI and IoT Device Security in 2021 | Keyfactor's Ellen Boehm
In this episode of the IoT For All Podcast, Ellen Boehm of Keyfactor joins us to share her expert insights on the security landscape of 2021. Ellen shares best practices for device manufacturers, the effects that continued improvements to AI and edge computing will have on device security, and how PKI has emerged as the top technology for securing devices. Ellen Boehm has over 15 years' experience leading new product development with a focus on IoT and connected products in lighting controls, smart cities, connected buildings, and smart home technology. Currently, she is Senior Director of IoT Product Management at Keyfactor, a leading provider of secure digital identity management solutions. There, Ellen leads the product strategy and go-to-market approach for the Keyfactor Control platform, focusing on digital identity security solutions for the IoT device manufacturer market. Interested in connecting with Ellen? Reach out to her on LinkedIn!

About Keyfactor: Keyfactor is in the digital security management space, providing the tools and support needed to secure a company's digital identity and giving IT and infosec teams the ability to easily manage their digital certificates and keys, whether that's protecting data, devices, and/or applications across an enterprise. Keyfactor enables manufacturers of connected IoT products to free themselves from the risk of costly warranty recalls and emerging threats by making it easy and affordable to build high-assurance secure identity into each step of the IoT device lifecycle.

Key Questions and Topics from this Episode:
(00:44) Intro to Ellen Boehm
(01:33) Intro to Keyfactor
(03:40) What is PKI?
(08:01) Why is PKI important? What issues caused PKI to emerge as a top technology in terms of securing IoT devices?
(10:44) What's the best approach when it comes to building secure IoT devices?
(13:27) What's your take on the IoT security landscape? What are some of the biggest challenges you've seen facing companies building in this space?
(15:38) How do regulations come into all of this? What can companies do to ensure compliance with current regulations and to plan and adjust for the future?
(18:33) How do you foresee adoption changing in the new year? What effects do you think COVID has had on the IoT landscape?
(22:25) How will the increased emphasis on other leading tech like AI and edge computing affect security for IoT devices?
(24:04) What advice do you have for companies interested in building in the IoT space but don't know where to start in terms of security?
28 min
The History of Computing
Charles Edge
Connections: ARPA > RISC > ARM > Apple's M1
Let’s oversimplify something in the computing world, which is what you have to do when writing about history. You have to put your blinders on so you can get to the heart of a given topic without overcomplicating the story being told. And in the evolution of technology we can’t mention all of the advances that led to each subsequent evolution. It’s wonderful and frustrating all at the same time. And that value judgment of what goes in and what doesn’t can be tough. Let’s start with the fact that there are two main types of processors in our devices. There’s the x86 chipset developed by Intel and AMD, and then there are the RISC-based processors, which include ARM and, for the old-school people, PowerPC and SPARC. Today we’re going to set aside the x86 chipset that was dominant for so long and focus on how the RISC, and so the ARM, family emerged. First, let’s think about the main difference between ARM and x86. RISC chips, ARM among them, focus on reducing the number of instructions required to perform a task to as few as possible; RISC stands for Reduced Instruction Set Computing. Intel, other than with the Atom series, has focused its x86 chips on high performance and high throughput. Big and fast, no matter how much power and cooling is necessary. The ARM processor uses simpler instructions, which means there’s less logic on the chip, and so more instructions are required to perform certain logical operations. This increases memory usage and can increase the time to complete an execution, which ARM developers address with techniques like pipelining, or instruction-level parallelism on a processor. Seymour Cray came up with this approach to split up instruction execution so each stage of the processor handles a different part, and the STAR, Amdahl, and then ARM designs implemented it as well. The x86 chips are Complex Instruction Set Computing chips, or CISC. Those will do larger, more complicated tasks, like floating point computations or memory searches, on the chip.
That often requires more consistent and larger amounts of power. ARM chips are built for low power. The reduced complexity of operations is one reason, but it’s also in the design philosophy. This means fewer heat sinks and often accounting for less consistent streams of power. That 130-watt x86 versus 5-watt ARM difference can mean slightly lower clock speeds, but the chips can cost more, even as buyers spend less on heat sinks and power supplies. This also makes ARM excellent for mobile devices. The inexpensive MOS 6502 chips helped revolutionize the personal computing industry in 1975, finding their way into the Apple II and a number of early computers. They were RISC-like, but CISC-like as well. They took some of their instruction set architecture family from the IBM System/360 through to the PDP, the Data General Nova, the Intel 8080, and Zilog, and after the emergence of Windows, Intel finally captured the personal computing market and the x86 flourished. But the RISC architecture actually goes back to the ACE, designed in 1946 by Alan Turing. It wasn’t until the 1970s that Carver Mead from Caltech and Lynn Conway from Xerox PARC saw that the number of transistors on chips was going to plateau while workloads on chips were growing exponentially. ARPA and other agencies needed more and more instructions, so they instigated what we now refer to as the VLSI project, a DARPA program initiated by Bob Kahn to push into the 32-bit world. They provided funding to different universities, including Stanford and the University of North Carolina. Out of those projects, we saw the Geometry Engine, which led to a number of computer-aided design, or CAD, efforts to aid in chip design. Those workstations, when linked together, evolved into tools used on the Stanford University Network, or SUN, which would effectively spin out of Stanford as Sun Microsystems.
And across the bay at Berkeley we got a standardized Unix implementation that could use the tools being developed there, the Berkeley Software Distribution, or BSD, which would eventually become the basis of the operating systems used by Sun and SGI, and now OpenBSD and other variants. And the efforts from the VLSI project led to Berkeley RISC in 1980 and Stanford MIPS, as well as the multi-chip wafer. The leader of that Berkeley RISC project was David Patterson, who still serves as vice chair of the RISC-V Foundation. The chips would add more and more registers, but with fewer specializations. This led to the need for more memory. But UC Berkeley students shipped a faster chip than was otherwise on the market in 1981. And the RISC II was usually double or triple the speed of the Motorola 68000. That led to the Sun SPARC and the DEC Alpha. There was another company paying attention to what was happening in the RISC project: Acorn Computers. They had been looking into using the 6502 processor until they came across the scholarly works coming out of Berkeley about the RISC project. Sophie Wilson and Steve Furber from Acorn then got to work building an instruction set for the Acorn RISC Machine, or ARM for short. They had the first ARM working by 1985, which they used to build the Acorn Archimedes. The ARM2 would be faster than the Intel 80286, and by 1990 Apple was looking for a chip for the Apple Newton. A new company called Advanced RISC Machines, or ARM, was founded, and from there they grew, with Apple being a shareholder through the 90s. By 1992, they were up to the ARM6, and the ARM610 was used for the Newton. DEC licensed the ARM architecture to develop the StrongARM, selling chips to other companies. Acorn would be broken up in 1998 and parts sold off, but ARM would live on, eventually acquired by SoftBank for $32 billion in 2016. SoftBank is currently in talks to sell ARM to Nvidia for $40 billion.
Meanwhile, John Cocke at IBM had been working on RISC concepts since 1975 for embedded systems, and by 1982 IBM moved on to developing its own 32-bit RISC chips. This led to the POWER instruction set, which they shipped in 1990 as the RISC System/6000, or as we called them at the time, the RS/6000. They scaled that down to the PowerPC and in 1991 forged an alliance with Motorola and Apple. DEC designed the Alpha. It seemed as though the computer industry was Microsoft and Intel versus the rest of the world, using a RISC architecture. But by 2004 the alliance between Apple, Motorola, and IBM began to unravel, and by 2006 Apple moved the Mac to Intel processors. But something was changing in computing. Apple shipped the iPod back in 2001, effectively ushering in the era of mobile devices. In 2007, Apple released the first iPhone, which shipped with a Samsung-manufactured ARM chip. You see, the interesting thing about ARM is that, unlike Intel, they don’t fab chips; they license technology and designs. Apple licensed the Cortex-A8 from ARM for the iPhone 3GS by 2009, but had an ambitious lineup of tablets and phones in the pipeline. And so in 2010 Apple did something new: they made their own system on a chip, or SoC. Continuing to license some ARM technology, Apple pushed on, getting between 800 MHz and 1 GHz out of the chip and using it to power the iPhone 4, the first iPad, and the long-overdue second-generation Apple TV. The next year came the A5, used in the iPad 2 and the first iPad Mini, then the A6 at 1.3 GHz for the iPhone 5, and the A7 for the iPhone 5s and iPad Air. That was the first 64-bit consumer SoC. In 2014, Apple released the A8 processor for the iPhone 6, which came in speeds ranging from 1.1 GHz up to the 1.5 GHz chip in the 4th-generation Apple TV. By 2015, Apple was up to the A9, which clocked in at 1.85 GHz for the iPhone 6s.
Then we got the A10 in 2016, the A11 in 2017, the A12 in 2018, A13 in 2019, A14 in 2020 with neural engines, 4 GPUs, and 11.8 billion transistors compared to the 30,000 in the original ARM. And it’s not just Apple. Samsung has been on a similar tear, firing up the Exynos line in 2011 and continuing to license the ARM up to Cortex-A55 with similar features t…
15 min
Chalk Radio
MIT OpenCourseWare
Making Solid State Chemistry Matter (Prof. Jeffrey Grossman)
First-year students who already plan to major in chemistry don’t require any special bells or whistles to motivate them to study the subject. But introductory chemistry is a required subject for all students at MIT, regardless of their intended major, and materials scientist Jeffrey Grossman has found that for many students in his course 3.091 Introduction to Solid State Chemistry, the subject becomes much more accessible if he takes conscious steps to make it real for them. He does this both inside and outside the classroom. First, he makes sure that part of each lecture he delivers explores the connection between the topic of the lecture and his students’ actual experience. Second, he gives students the chance to play around with real-world materials so they can learn the principles of chemistry firsthand. As Professor Grossman explains in this episode, it was by playing around with materials that the very first chemists began to learn about matter and its properties, and this kind of basic experimentation has an inherently multisensory quality that deepens and enriches students’ understanding of the concepts they learn.

Relevant Resources:
MIT OpenCourseWare
The OCW Educator Portal
Professor Grossman’s course on OCW
Professor Grossman’s faculty page
MIT’s General Institute Requirements (GIRs)
“Plenty of Room at the Bottom” (PDF) (Richard Feynman’s lecture on atomic-scale engineering)

Music in this episode by Blue Dot Sessions.

Connect with Us: If you have a suggestion for a new episode or have used OCW to change your life or those of others, tell us your story. We’d love to hear from you! Call us at 617 475-0534, or reach us on our site, on Facebook, on Twitter, or on Instagram.

Stay Current: Subscribe to the free monthly “MIT OpenCourseWare Update” e-newsletter.

Support OCW: If you like Chalk Radio and OpenCourseWare, donate to help keep these programs going!
12 min