- 36 - RCE 118: MEEP
Steven G. Johnson is a Professor of Applied Mathematics and Physics at MIT. He works in the field of nanophotonics—electromagnetism in media structured on the wavelength scale, especially in the infrared and optical regimes—where he focuses on many aspects of the theory, design, and computational modeling of nanophotonic devices, both classical and quantum. He is coauthor of over 200 papers and over 25 patents, as well as the second edition of the textbook Photonic Crystals: Molding the Flow of Light. In addition to traditional publications, he distributes several widely used free-software packages for scientific computation, including the MPB and Meep electromagnetic simulation tools and the FFTW fast Fourier transform library (for which he received the 1999 J. H. Wilkinson Prize for Numerical Software). http://math.mit.edu/~stevenj/ https://github.com/stevengj/meep https://github.com/stevengj/mpb

Ardavan Oskooi is the Founder/CEO of Simpetus, a San Francisco-based startup with a mission to propel simulations to the forefront of research and development in electromagnetics. The name Simpetus is a reference to the company's vision of simulations being an impetus for new discoveries and technologies. Ardavan received his Sc.D. from MIT, where he worked with Professor Steven G. Johnson (thesis: Computation & Design for Nanophotonics) to develop Meep. Ardavan has published 13 first-author articles in peer-reviewed journals and a book, "Advances in FDTD Computational Electrodynamics: Photonics and Nanotechnology". He holds a master's degree in Computation for Design and Optimization from MIT and completed his undergraduate studies, with honors, in Engineering Science at the University of Toronto. Prior to launching Simpetus, Ardavan worked as a postdoctoral researcher with Professors Susumu Noda at Kyoto University and Stephen R. Forrest at the University of Michigan on leveraging Meep to push the frontier of optoelectronic device design. Company: www.simpetus.com
Sun, 22 Apr 2018 - 40min - 35 - RCE 117: PMIx
Dr. Ralph H. Castain is a Principal Engineer at Intel, where he focuses on the development of control system technologies for exascale computing systems. Dr. Castain received his B.S. in physics from Harvey Mudd College and multiple graduate-level degrees (an M.S. in solid-state physics, an M.S.E.E. in robotics, and a Ph.D. in nuclear physics) from Purdue University. He has served in government, academia, and industry for over 30 years as a contributing scientist and business leader in fields ranging from HPC to nuclear physics, particle accelerator design, remote sensing, autonomous pattern recognition, and decision analysis. He is currently the founder and leader of the PMIx community (https://pmix.github.io/pmix).
Sat, 18 Nov 2017 - 39min - 34 - RCE 116: Jupyter
Brian Granger is an associate professor of physics and data science at Cal Poly State University in San Luis Obispo, CA. His research focuses on building open-source tools for interactive computing, data science, and data visualization. Brian is a leader of the IPython project, co-founder of Project Jupyter, co-founder of the Altair project for statistical visualization, and an active contributor to a number of other open-source projects focused on data science in Python. He is an advisory board member of NumFOCUS and a faculty fellow of the Cal Poly Center for Innovation and Entrepreneurship.
Sun, 29 Oct 2017 - 43min - 33 - RCE 115: PBS Professional
Dr. Bill Nitzberg is the CTO of PBS Works at Altair and “acting” community manager for the PBS Pro Open Source Project (www.pbspro.org). With over 25 years in the computer industry, spanning commercial software development to high-performance computing research, Dr. Nitzberg is an internationally recognized expert in parallel and distributed computing. Dr. Nitzberg served on the board of the Open Grid Forum, co-architected NASA’s Information Power Grid, edited the MPI-2 I/O standard, and has published numerous papers on distributed shared memory, parallel I/O, PC clustering, job scheduling, and cloud computing. When not focused on HPC, Bill tries to improve his running economy for his long-distance running adventures. http://www.pbspro.org/
Fri, 18 Aug 2017 - 37min - 32 - RCE 114: NetCDF
NetCDF is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.
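As a rough, hedged illustration of what NetCDF's self-describing model looks like in practice (not taken from the episode), the following minimal Python sketch uses the netCDF4 package; the file name, dimensions, variable, and values are invented for the example.

# Minimal sketch of writing an array-oriented, self-describing dataset with
# the netCDF4 Python package. Names and values are placeholders for illustration.
from netCDF4 import Dataset
import numpy as np

ds = Dataset("example.nc", "w", format="NETCDF4")

# Dimensions describe the shape of the data; "time" is left unlimited.
ds.createDimension("time", None)
ds.createDimension("lat", 73)
ds.createDimension("lon", 144)

# A variable carries its name, type, dimensions, and attributes (e.g. units),
# which is what makes the file self-describing.
temp = ds.createVariable("temperature", "f4", ("time", "lat", "lon"))
temp.units = "K"

# Write one time step of data, then close the file.
temp[0, :, :] = 280.0 + 10.0 * np.random.rand(73, 144)
ds.close()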
Fri, 28 Jul 2017 - 44min - 31 - RCE 113: Shifter
Shifter is a prototype implementation that NERSC is developing and experimenting with as a scalable way of deploying containers in an HPC environment. It works by converting user- or staff-generated images from Docker, virtual machines, or CHOS (another method for delivering flexible environments) to a common format. This common format then provides a tunable point that allows images to be scalably distributed on the Cray supercomputers at NERSC. The user interface to Shifter enables a user to select an image from their Docker Hub account and then submit jobs that run entirely within the container.
Sat, 08 Jul 2017 - 43min - 30 - RCE 112: Stanford Center for Reproducible Neuroscience
Chris Gorgolewski is a co-director of the Stanford Center for Reproducible Neuroscience and a research associate at Stanford University, California, USA. He is interested in enabling new discoveries in human neuroscience by building data-sharing and analysis tools and services, as well as establishing new data standards and data-sharing policies. http://reproducibility.stanford.edu/
Fri, 28 Apr 2017 - 34min - 29 - RCE 111: Deal.II
deal.II is a C++ software library supporting the creation of finite element codes, and an open community of users and developers.
Sun, 19 Mar 2017 - 37min - 28 - RCE 110: SAGE2
SAGE2 enables groups to work in front of large shared displays in order to solve problems that require juxtaposing large volumes of information at ultra-high resolution. SAGE2 is developed as a complete redesign and implementation of SAGE, using cloud-based and web-browser technologies in order to enhance data-intensive co-located and remote collaboration.
Sat, 11 Feb 2017 - 34min - 27 - RCE 109: iRODS
Thu, 26 Jan 2017 - 46min
- 26 - RCE 108: Academic Torrents
Academic Torrents is a distributed system for sharing enormous datasets - for researchers, by researchers. The result is a scalable, secure, and fault-tolerant repository for data, with blazing fast download speeds.
Sun, 23 Oct 2016 - 47min - 25 - RCE 107: Julia
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library. Julia’s Base library, largely written in Julia itself, also integrates mature, best-of-breed open source C and Fortran libraries for linear algebra, random number generation, signal processing, and string processing. In addition, the Julia developer community is contributing a number of external packages through Julia’s built-in package manager at a rapid pace. IJulia, a collaboration between the Jupyter and Julia communities, provides a powerful browser-based graphical notebook interface to Julia.
Wed, 05 Oct 2016 - 49min - 24 - RCE 106: Singularity
Brock Palen and Jeff Squyres speak with Gregory Kurtzer about Singularity. Singularity allows a non-privileged user to "swap out" the operating system on the host for one they control. So if the host system is running RHEL6 but your application runs in Ubuntu, you can create an Ubuntu image, install your applications into that image, copy the image to another host, and run your application on that host in its native Ubuntu environment!

Gregory Kurtzer has created many open source initiatives related to HPC, namely CentOS Linux, Warewulf, Perceus, and most recently Singularity. Currently Gregory serves as a member of the OpenHPC Technical Steering Committee and is the IT HPC Systems Architect and Software Developer for Lawrence Berkeley National Laboratory. Singularity: http://singularity.lbl.gov/ GitHub: https://github.com/gmkurtzer/singularity Twitter: https://twitter.com/gmkurtzer / https://twitter.com/SingularityApp
Thu, 15 Sep 2016 - 37min - 23 - RCE 105: Impala
Marcel Kornacker is the Chief Architect for database technology at Cloudera and creator of the Cloudera Impala project. Following his graduation in 2000 with a PhD in databases from UC Berkeley, he held engineering positions at several database-related start-up companies. Marcel joined Google in 2003, where he worked on several ad-serving and storage infrastructure projects, then became tech lead for the distributed query engine component of Google's F1 project.
Sat, 09 Apr 2016 - 42min - 22 - RCE 104: D-Wave Quantum Computing
Edward (Denny) Dahl is a Ph.D. physicist who has been at D-Wave Systems for over four years. He works with customers to help them understand the principles of adiabatic quantum computing as implemented in the D-Wave 2X System. He is currently on assignment at the Los Alamos National Laboratory, which recently purchased a one-thousand qubit system from D-Wave. His interests are quantum programming, playing the guitar and exploring the high deserts of north central New Mexico.
Sat, 19 Mar 2016 - 44min - 21 - RCE 103: EasyBuild
EasyBuild is a software build and installation framework that allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way. EasyBuild homepage: http://hpcugent.github.io/easybuild
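To give a flavor of how EasyBuild is driven, here is a hedged sketch of an easyconfig file, the Python-syntax recipe EasyBuild uses to describe a build; the package, version, toolchain, and URLs shown are placeholders rather than anything discussed in the episode, and field details may differ between EasyBuild versions.

# Hedged sketch of an EasyBuild easyconfig (a Python-syntax build recipe).
# Package, version, toolchain, and URLs are illustrative placeholders.
easyblock = 'ConfigureMake'   # generic configure / make / make install procedure

name = 'zlib'
version = '1.2.8'

homepage = 'http://www.zlib.net/'
description = "zlib is a free, general-purpose lossless data-compression library."

# The toolchain identifies the compiler (and MPI/math library) stack used for the build.
toolchain = {'name': 'foss', 'version': '2016a'}

source_urls = ['http://www.zlib.net/']
sources = ['%(name)s-%(version)s.tar.gz']

moduleclass = 'lib'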
Sun, 14 Feb 2016 - 41min - 20 - RCE 102: Spack
Spack is a package management tool designed to support multiple versions and configurations of software on a wide variety of platforms and environments. It was designed for large supercomputing centers, where many users and application teams share common installations of software on clusters with exotic architectures, using libraries that do not have a standard ABI. Spack is non-destructive: installing a new version does not break existing installations, so many configurations can coexist on the same system. https://github.com/scalability-llnl/spack

Todd is a computer scientist in the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory. His research focuses on scalable tools for measuring, analyzing, and visualizing the performance of massively parallel simulations. Todd works closely with production simulation teams at LLNL, and he likes to create tools that users can pick up easily. Frustrated with the complexity of building HPC performance tools, Todd started developing Spack two years ago to allow users to painlessly install software on big machines. Spack has since been adopted by Livermore Computing, other HPC centers, and LLNL application teams. The open source project now includes several core developers at LLNL and a rapidly growing community on GitHub. A 1.0 release is coming soon.
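As a rough sketch of what a Spack package recipe looks like (the package, URL, checksum, and build steps below are invented, and the API shown follows the Spack of roughly that era, so details may differ in current releases):

# Hedged sketch of a Spack package recipe: a Python class in a package.py file.
# The package, URL, checksum, and dependency are illustrative placeholders.
from spack import *

class Libexample(Package):
    """An illustrative autotools-based library (placeholder)."""

    homepage = "https://example.org/libexample"
    url      = "https://example.org/libexample-1.0.tar.gz"

    # Each known version is registered with a checksum so downloads can be verified.
    version('1.0', '0123456789abcdef0123456789abcdef')

    # Dependencies are resolved per configuration, so many builds can coexist.
    depends_on('zlib')

    def install(self, spec, prefix):
        # Classic configure / make / make install, rooted at a unique prefix
        # so installing a new version never clobbers an existing installation.
        configure('--prefix={0}'.format(prefix))
        make()
        make('install')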
Fri, 30 Oct 2015 - 38min - 19 - RCE 101: Conduit
Conduit is an open source project from Lawrence Livermore National Laboratory. It provides an intuitive model for describing hierarchical scientific data in C++, C, Fortran, and Python and is used for data coupling between packages in-core, serialization, and I/O tasks. Docs: http://scalability-llnl.github.io/conduit/ Repo: https://github.com/scalability-llnl/conduit

Cyrus is a computer scientist and group leader in the Applications, Simulations, and Quality (ASQ) division of LLNL's Computation directorate. He is the software architect of the VisIt open source visualization tool and leads major aspects of the technical direction of the project. Cyrus also provides custom data analysis solutions for large scale scientific simulations in WCI's WSC and WPD programs. Cyrus obtained a B.S. (2003) and an M.S. (2004) in Computer Engineering from the University of Florida. He joined LLNL in February 2005.
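For a rough sense of the hierarchical model Conduit describes, here is a hedged sketch using Conduit's Python API; the node paths and values are made up, and exact API details may vary by Conduit version.

# Hedged sketch of describing hierarchical data with Conduit's Python API.
# Paths and values are invented; details may differ between Conduit versions.
import numpy as np
import conduit

n = conduit.Node()

# Hierarchical paths act like a nested, in-memory tree of named values.
n["state/cycle"] = 100
n["state/time"] = 3.14
n["fields/temperature/units"] = "K"
n["fields/temperature/values"] = np.linspace(280.0, 300.0, 10)

# Nodes print as a human-readable tree, which helps when debugging data coupling.
print(n)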
Sun, 04 Oct 2015 - 36min - 18 - RCE 100: Fasterdata
https://fasterdata.es.net/ http://www.es.net/about/esnet-staff/office-of-the-cto/Eli-Dart/

Eli Dart is a network engineer in the ESnet Science Engagement Group, which seeks to use advanced networking to improve scientific productivity and science outcomes for the DOE science facilities, their users, and their collaborators. Eli is a primary advocate for the Science DMZ design pattern, and works with facilities, laboratories, universities, science collaborations, and science programs to deploy data-intensive science infrastructure based on the Science DMZ model. Eli also runs the ESnet network requirements program, which collects, synthesizes, and aggregates the networking needs of the science programs ESnet serves.

Eli has over 15 years of experience in network architecture, design, engineering, performance, and security in scientific and research environments. His primary professional interests are high-performance architectures and effective operational models for networks that support scientific missions, and building collaborations to bring about the effective use of high-performance networks by science projects. As a member of ESnet's Network Engineering Group, Eli was a primary contributor to the design and deployment of two iterations of the ESnet backbone network - ESnet4 and ESnet5. Prior to ESnet, Eli was a lead network engineer at NERSC, DOE's primary supercomputing facility, where he co-led a complete redesign and several years of successful operation of the high-performance network infrastructure there. In addition, Eli spent 14 years as a member of SCinet, the group of volunteers that builds and operates the network for the annual IEEE/ACM Supercomputing conference series, from 1997 through 2010. He served as Network Security Chair for SCinet for the 2000 and 2001 conferences and was a member of the SCinet routing group from 2001 through 2010. Eli holds a Bachelor of Science degree in Computer Science from the Oregon State University College of Engineering.
Fri, 19 Jun 2015 - 45min - 17 - RCE 99: perfSONAR
Jason Zurawski is a Science Engagement Engineer at the Energy Sciences Network (ESnet) in the Scientific Networking Division of the Computing Sciences Directorate of the Lawrence Berkeley National Laboratory. ESnet is the high performance networking facility of the US Department of Energy Office of Science. ESnet's mission is to enable those aspects of the DOE Office of Science research mission that depend on high performance networking for success. Jason's primary responsibilities include working with members of the research community to identify the role of networking in scientific workflows, evaluate current requirements, and suggest improvements for future innovations.

Jason's professional interests include network monitoring and performance measurement, high performance computing, grid computing, and application development. He is a founding member of several open source R&E software projects, including perfSONAR, OWAMP, BWCTL, NDT, and OSCARS. Jason has worked in computing and networking since 2007, and has a B.S. in Computer Science & Engineering from The Pennsylvania State University earned in 2002, and an M.S. in Computer and Information Science from The University of Delaware earned in 2007. He has previously worked for the University of Delaware and Internet2. Jason resides and works in the Washington DC metro area, and may be reached via email at zurawski@es.net.
Wed, 20 May 2015 - 46min
Podcasts similar to RCE - Super Computers
- Global News Podcast BBC World Service
- El Partidazo de COPE COPE
- Herrera en COPE COPE
- Tiempo de Juego COPE
- The Dan Bongino Show Cumulus Podcast Network | Dan Bongino
- Es la Mañana de Federico esRadio
- La Noche de Dieter esRadio
- Hondelatte Raconte - Christophe Hondelatte Europe 1
- Affaires sensibles France Inter
- La rosa de los vientos OndaCero
- Más de uno OndaCero
- La Zanzara Radio 24
- Les Grosses Têtes RTL
- L'Heure Du Crime RTL
- El Larguero SER Podcast
- Nadie Sabe Nada SER Podcast
- SER Historia SER Podcast
- Todo Concostrina SER Podcast
- 安住紳一郎の日曜天国 TBS RADIO
- TED Talks Daily TED
- The Tucker Carlson Show Tucker Carlson Network
- 辛坊治郎 ズーム そこまで言うか! ニッポン放送
- 飯田浩司のOK! Cozy up! Podcast ニッポン放送
- 武田鉄矢・今朝の三枚おろし 文化放送PodcastQR