From: Carl Fink
Newsgroups: comp.ai
Subject: Re: Is it possible to train generative AI on CPU only?
Date: Wed, 4 Dec 2024 09:47:36 EST
Organization: PANIX Public Access Internet and UNIX, NYC
Reply-To: carl@finknetwork.com
User-Agent: slrn/1.0.3 (NetBSD)
On 2024-12-02, Tristan Miller wrote:
> I'm afraid I was reporting only half-remembered results from the last
> time I looked into the question, which would have been months ago.  I
> did a quick web search just now and came up with a couple of threads from
> the Kohya's GUI GitHub project that roughly accord with my recollection:
>
> https://github.com/bmaltais/kohya_ss/discussions/679
> https://github.com/bmaltais/kohya_ss/issues/2632
>
> The first of these reports CPU training taking roughly 40x as long as
> GPU training, and the second reports a 4x penalty in both time and
> memory.  They both refer to Stable Diffusion.
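For anyone wanting to try this themselves: under PyTorch (which both of
those projects build on), keeping a training step on the CPU is just a
matter of device placement.  Here's a minimal sketch -- the toy model and
sizes are purely illustrative, nothing like Stable Diffusion's actual
networks, but the device-handling pattern is the same:

```python
# Hedged sketch: one CPU-only training step in PyTorch.
# The model, dimensions, and data here are illustrative placeholders.
import torch
import torch.nn as nn

device = torch.device("cpu")  # no CUDA required

# A toy network standing in for whatever model you are finetuning.
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 64),
).to(device)

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Dummy batch created directly on the CPU device.
x = torch.randn(8, 64, device=device)
y = torch.randn(8, 64, device=device)

# One ordinary optimization step; nothing else changes for CPU-only runs.
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```

The 40x figure above is about wall-clock time, not correctness -- the
same loop runs on either device, it's just painfully slower on CPU for
models of Stable Diffusion's size.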
> There's also this LoRA finetuning guide for LLaMA that provides detailed
> CPU time and memory metrics for various models:
> https://rentry.org/cpu-lora

Thank you!
-- 
Carl Fink                                           carl@finknetwork.com
https://reasonablyliterate.com   https://nitpicking.com
If you want to make a point, somebody will take the point and stab you
with it.  -Kenne Estes