#ideathon-quantum
Total ideas: 4
Created by: @Bernard_YesZero
Created on: 2023/05/27
4 ideas
Bernard_YesZero · 2023/12/11

Possible paths to fault-tolerant quantum computing (FTQC) #953

#ideathon-quantum #ideathon-quantumcomputing #ideathon-neutralatoms

At Q2B 2023 Silicon Valley, John Preskill gave a plenary talk titled "Crossing the Quantum Chasm: From NISQ to Fault Tolerance". You can find the slides here: http://theory.caltech.edu/~preskill/talks/Preskill-Q2B-2023

A quick summary:
1. FTQC is the only way to achieve practical quantum value.
2. There are four ingredients for achieving FTQC:
   a. Erasure conversion: we need heralded error detection.
   b. Biased noise: physically suppress bit flips (X errors), and use an error-correcting code for phase flips (Z errors).
   c. More efficient codes, such as qLDPC codes.
   d. Co-design: code + hardware.
3. Neutral atoms will lead the error-correction community, because:
   a. Alkaline-earth atoms (like Sr, in my opinion) can herald errors.
   b. Movable atoms support qLDPC codes.
   c. They are cheaper (my opinion).
4. Big question: how will we scale up to quantum computing systems that can solve hard problems? Answer: we don't know yet.

My comments: neutral atoms are a possible winner of FTQC. But the real question is how much more powerful quantum computers are than classical ones. @EricZhang_ARKS
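Why does erasure conversion (point 2a) help? A heralded error carries its own location information, which a decoder can exploit. Here is a minimal sketch using a classical 3-bit repetition code as a stand-in; the real proposals involve alkaline-earth Rydberg qubits and full quantum codes, none of which is modeled here:

```python
import random

random.seed(0)

def logical_failure(p, n_bits=3, trials=20000, heralded=False):
    """Monte Carlo estimate of the logical failure rate of an
    n-bit repetition code at physical error rate p."""
    fails = 0
    for _ in range(trials):
        errors = [random.random() < p for _ in range(n_bits)]
        if heralded:
            # Erasure (heralded) errors: we know which copies were
            # lost, so decoding fails only if every copy is erased.
            fails += all(errors)
        else:
            # Unheralded bit flips: majority vote fails once more
            # than half the copies have flipped.
            fails += sum(errors) > n_bits // 2
    return fails / trials

p = 0.1
f_flip = logical_failure(p, heralded=False)   # roughly 3p^2 for n = 3
f_erase = logical_failure(p, heralded=True)   # roughly p^3 for n = 3
print(f"unheralded flips: {f_flip:.4f}, heralded erasures: {f_erase:.4f}")
```

The toy decoder tolerates n-1 heralded erasures but only floor((n-1)/2) unheralded flips, so the failure rate drops from order p^2 to order p^3 already at n = 3.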
Bernard_YesZero · 2023/06/08

More qubits vs. better qubits #443

#ideathon-quantumcomputing #ideathon-quantum

Jay Gambetta announced yesterday that the "first European quantum data center is coming to Ehningen, Germany in 2024". IBM Quantum also unveiled the Osprey chip with 433 qubits at the end of 2022. Yes, we are seeing more qubits, but for quantum computation we also want better qubits. Today on arXiv, a paper (https://arxiv.org/abs/2306.03939v1) focuses on the "IBM Quantum System One" in Ehningen and finds that up to seven qubits are capable of violating local hidden-variable models.
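The hidden-variable test behind such results is a CHSH-type Bell inequality: any local hidden-variable model obeys S ≤ 2, while quantum mechanics reaches 2√2. A quick sketch of the ideal two-qubit quantum value (the angles below are the textbook-optimal settings, not necessarily the ones used in the paper):

```python
import math

def E(a, b):
    # Correlation of outcomes when a singlet state is measured
    # with analyzer angles a and b: E(a, b) = -cos(a - b).
    return -math.cos(a - b)

# Textbook-optimal CHSH angles (radians)
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(f"CHSH value S = {S:.4f} (classical bound: 2)")
```

With these settings S = 2√2 ≈ 2.83, the Tsirelson bound; observing S > 2 on a pair of hardware qubits is what rules out a local hidden-variable description.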
Bernard_YesZero · 2023/05/27

Can quantum computers substantially boost LLM training? #300

#ideathon-quantumcomputing #ideathon-quantum #ideathon-LLM

LLMs such as ChatGPT have caused a real sensation recently, but the training cost shuts out most companies and players. Can we train LLMs on quantum computers, and at what complexity? In a recent arXiv paper (2303.03428, https://arxiv.org/abs/2303.03428v2), the authors propose an efficient solution for generic (stochastic) gradient descent algorithms, scaling as O(T^2 * polylog(n)), where T is the number of iterations and n is the number of parameters, provided the models are both sufficiently dissipative and sparse, with small learning rates. The authors also benchmark the solution on ResNets of different sizes and find it is possible to obtain a quantum enhancement after model pruning. We do not know the parameters of GPT-3.5 or GPT-4, but if they are dissipative and sparse like ResNet, it may be possible to REALLY decrease the huge cost of training. It's a genuinely promising application for quantum computation.
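To see why O(T^2 * polylog(n)) matters, compare it with the O(T * n) cost of classical gradient descent, where every iteration touches all n parameters. The constant and polylog degree below are made up purely for illustration; only the asymptotic shapes come from the paper's claim:

```python
import math

def classical_cost(T, n):
    # Each of T gradient steps touches all n parameters: O(T * n).
    return T * n

def quantum_cost(T, n, c=1.0, k=3):
    # Claimed scaling O(T^2 * polylog(n)), modeled here as
    # c * T^2 * log2(n)^k; c and k are illustrative guesses.
    return c * T**2 * math.log2(n) ** k

T = 10_000  # iterations
for n in (10**6, 10**9, 10**12):
    print(f"n={n:.0e}: classical {classical_cost(T, n):.2e}, "
          f"quantum {quantum_cost(T, n):.2e}")
```

Because the quantum cost grows only polylogarithmically in n (but quadratically in T), the claimed advantage kicks in precisely in the LLM regime: huge parameter counts with a comparatively modest number of training iterations.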