<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
		<atom:link href="http://dev1galaxy.org/extern.php?action=feed&amp;tid=7872&amp;type=rss" rel="self" type="application/rss+xml" />
		<title><![CDATA[Dev1 Galaxy Forum / recent Ceres dependency]]></title>
		<link>http://dev1galaxy.org/viewtopic.php?id=7872</link>
		<description><![CDATA[The most recent posts in recent Ceres dependency.]]></description>
		<lastBuildDate>Sun, 29 Mar 2026 02:32:06 +0000</lastBuildDate>
		<generator>FluxBB</generator>
		<item>
			<title><![CDATA[recent Ceres dependency]]></title>
			<link>http://dev1galaxy.org/viewtopic.php?pid=62938#p62938</link>
			<description><![CDATA[<p>The dependency showed up last week when upgrading from mpv-0.40.0-3+deb13u1 to mpv-0.41.0-2+b2. The mpv upgrade caused libpipewire to be upgraded which, in turn, caused libspa-0.2-modules to be upgraded. </p><p>I found the problem when checking what was to be installed, and also found a temporary solution: mpv&#039;s dependency on libpipewire is &gt;= 1.0.4, which means upgrading libpipewire (which causes libspa-0.2-modules to be upgraded) wasn&#039;t required. The temporary solution is to keep the older versions of libpipewire and libspa-0.2-modules, shown below, installed and on hold. </p><p>Package versions and the new Ceres dependency are below. It seems Debian decided users need AI. </p><p>libpipewire-0.3-0t64 1.4.10-1+b1&#160; &gt;&#160; libpipewire-0.3-0t64 1.6.2-1<br />libspa-0.2-modules-1.4.10-1+b1&#160; &#160;&gt;&#160; libspa-0.2-modules-1.6.2-1</p><p>New dependency for the libspa-0.2-modules-1.6.2-1 pkg:<br />libonnxruntime1.23</p><p>Purpose and dependencies:<br /><a href="https://packages.debian.org/sid/libonnxruntime1.23" rel="nofollow">https://packages.debian.org/sid/libonnxruntime1.23</a></p><p>From the Homepage:<br /><a href="https://github.com/microsoft/onnxruntime" rel="nofollow">https://github.com/microsoft/onnxruntime</a></p><p>ONNX Runtime is a cross-platform inference and training machine-learning accelerator.</p><p>ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. 
ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms.</p><p>One search result for onnx at pipewire.org:<br /><a href="https://docs.pipewire.org/devel/page_module_filter_chain.html" rel="nofollow">https://docs.pipewire.org/devel/page_module_filter_chain.html</a><br />Scroll down to &#039;ONNX filters&#039;<br />&quot;There is an optional ONNX filter available...&quot;</p><p>Open Neural Network Exchange<br /><a href="https://onnx.ai/" rel="nofollow">https://onnx.ai/</a></p><p>ONNX: Train in Any Framework, Deploy on Any Hardware<br />Nov 12, 2025 <br /><a href="https://www.datacamp.com/tutorial/onnx" rel="nofollow">https://www.datacamp.com/tutorial/onnx</a></p>]]></description>
			<author><![CDATA[dummy@example.com (fanderal)]]></author>
			<pubDate>Sun, 29 Mar 2026 02:32:06 +0000</pubDate>
			<guid>http://dev1galaxy.org/viewtopic.php?pid=62938#p62938</guid>
		</item>
	</channel>
</rss>
