
March 04, 2025

Deploying FreePBX and Asterisk on a single Ubuntu virtual machine in a public cloud is an ideal solution for personal users and small to medium-sized businesses with voice over IP (VoIP) and fax over IP (FoIP) needs. This setup costs nothing, is scalable and secure, and has daily recovery points with a recovery time measured in minutes. In this article and its companion how-to guide on GitHub, you’ll go through the Day 0 to Day 2 process of provisioning a long-running FreePBX system that is reliable, easy to maintain, and optimized for both voice and fax communications.

While most VoIP providers handle voice calls effectively, faxing over IP requires special care. Fax transmissions are especially sensitive to packet loss and variable latency (jitter), which can lead to failed faxes or poor-quality transmissions. For reliable faxing, it’s critical to use a provider that officially supports T.38 FoIP, an International Telecommunication Union (ITU-T) standard for sending faxes over IP networks in real time, one with a proven track record of compatibility with a wide range of fax machines and software. T38Fax Incorporated, for example, is a provider specifically designed to address these challenges, ensuring reliable faxing over SIP for all T.38-capable endpoints. Engineers at T38Fax are familiar with FreePBX and Asterisk installations on Ubuntu, and know how to configure and diagnose FoIP pass-through to fax server software, like HylaFAX Enterprise and OpenText RightFax, and to analog telephone adapters (ATAs) connected to fax machines.

The evolution of Asterisk and FreePBX

Mark Spencer created the Asterisk software for Linux in 1999, and it slashed monthly telephone bills for homes and businesses. Homes and businesses originally connected to the public switched telephone network (PSTN) through analog voice lines maintained by regional telecom companies; Asterisk enabled them to connect to the PSTN over the internet instead. VoIP benefited from lower overall transmission costs, calls no longer each required a dedicated line, and VoIP was initially exempt from regulatory taxes.

Ward Mundy’s “asterisk@home” web frontend simplified Asterisk management; it was later rebranded as the Asterisk Management Portal (AMP) and included FreePBX. Pioneers like Jeff Pulver built on open source and proprietary VoIP innovations in the early 2000s, and this collective interest among technologists and consumers sparked a “cut-the-cord” movement. VoIP service market share grew from 4.7% in 1999 to 27% in 2008, according to the FCC’s 2010 “Trends in Telephone Service” report. Many telecom and hardware companies had to evolve, and began offering services and products that catered to Asterisk and FreePBX users.

Why faxing over IP is different

Faxing over IP is not the same as voice over IP. Fax machines are extremely sensitive to network conditions that impact audio quality. Voice calls can tolerate a higher degree of packet loss and latency, but fax transmissions require error-free delivery. The T.38 protocol is designed to compensate for imperfect networks, such as the internet, and improves the reliability of fax transmissions when using VoIP infrastructure.

While the science of T.38 may be sound, and it offers a compelling solution to the challenges presented by packet loss and jitter often experienced in real-world deployments over both public and private networks, the implementation of this technology in many carrier networks has been haphazard at best. It’s no surprise to us that many customers have struggled to achieve reliable fax transmissions using T.38.

Darren Nickerson, President of T38Fax

Indeed, not all VoIP providers support T.38, and even among those that do, reliability and compatibility vary wildly. Many providers recommend disabling error correction and/or reducing transmission speeds, or shut these features off at their media gateways. These workarounds mask underlying FoIP reliability problems, and result in faxes reported as successfully transmitted that are nonetheless illegible to the recipient. Even a properly configured FreePBX system cannot overcome misconfigurations at the provider level. Choosing a provider that specializes in T.38 FoIP ensures compatibility with a wide range of fax machines and software.

Key considerations when deploying FreePBX and Asterisk on Ubuntu

Five areas of focus are top of mind when provisioning compute and network resources, and when installing FreePBX and Asterisk:

  1. Cost-effectiveness: The server must be cheap to deploy and run over time, and the instance should be long-lived. Cost savings and minimal maintenance effort are strong motivations.
  2. Reliability and performance: Uptime is paramount; the network and the server must both be stable, reliable, and performant.
  3. Infrastructure as Code (IaC): Installations of Asterisk and FreePBX should use version controlled, declarative, and idempotent instructions, limiting arbitrary commands and command injection risks, reducing privilege escalation risks, and implementing validation and error handling.
  4. Predictable upgrades: Software upgrades, especially security and bugfixes, must not break existing configurations. Security patching automations should be enabled for automatic updates at appropriate intervals.
  5. Recovery point objective (RPO) and recovery time objective (RTO): A daily RPO schedule with an RTO measured in minutes: in the event of a failure, the maximum acceptable data loss (RPO) is one day, and the maximum acceptable downtime (RTO) is only a few minutes. This can be achieved using a combination of the FreePBX Backup and Restore Module, and external storage such as an S3 bucket.

Day 0 to Day 2 operations

Bringing up a fully operational FreePBX system requires three phases. This article provides a guided journey, answering the “what” and “why” behind the implementation choices in the companion how-to guide on GitHub.

Day 0: Planning

Why Ubuntu? Installing FreePBX 17 and Asterisk 20.6 on Ubuntu 24.04 LTS allows you to benefit from Canonical’s security patching and systems management tools until April 2036, without having to contend with configuration changes associated with major version upgrades.

Why public clouds? Deploying to a public cloud provider with an “Always Free” service tier is cost-effective, because there is no upfront or recurring monthly cost when usage stays within the always free limits. Both Google Cloud and Oracle Cloud provide robust networking for reliable operation and S3-compatible cloud storage for easy backup and recovery within their always free tiers.

Both Google Cloud and Oracle Cloud provide a $300 credit. Google’s credit is valid for the first 90 days of every new account, and Oracle’s credit is valid for the first 30 days of every new account. These credits provide a buffer for any unanticipated usage beyond the always free tier.

  • Why Google Cloud? Google Cloud provides 1 GB of outbound data transfer per month, from North America to all region destinations (excluding China and Australia). E-mail delivery services for notifications have to be configured through SendGrid, Mailjet, or another external SMTP service provider. It is possible to sign up for a SendGrid account through Google Cloud, which allows sending 12,000 free emails per month. Deploy FreePBX and Asterisk to an Ubuntu virtual machine on Google Cloud’s Always Free tier using the gcloud utility, as sketched after this list.
  • Why Oracle Cloud? Oracle Cloud provides virtual machines in the free tier worldwide, and allows 10 TB of outbound data transfer globally each month. E-mail delivery service is included within the free tier, capped at 100 emails per day, but any external SMTP mail service provider can also be used. Install and configure FreePBX and Asterisk by manually triggering cloud-init modules on an already provisioned Ubuntu virtual machine in Oracle Cloud. It is also possible to deploy and provision FreePBX on an Ubuntu virtual machine using a preconfigured cloud-init.yaml file, through Oracle Cloud’s web portal. Step-by-step instructions for using Oracle’s OCI command line utility to provision and deploy FreePBX have not yet been authored; this could be an excellent Pull Request opportunity for any motivated open source contributors.
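
Below is a minimal sketch of a gcloud invocation that creates an Always Free e2-micro instance and passes a cloud-init file as user-data. The project ID, zone, and file name are placeholders, and the companion how-to guide on GitHub remains the authoritative reference.

# Sketch: create an Always Free e2-micro VM running Ubuntu 24.04 LTS and pass
# cloud-init.yaml as user-data (my-project and us-east1-b are placeholders)
gcloud compute instances create freepbx \
  --project=my-project \
  --zone=us-east1-b \
  --machine-type=e2-micro \
  --image-family=ubuntu-2404-lts-amd64 \
  --image-project=ubuntu-os-cloud \
  --metadata-from-file=user-data=cloud-init.yaml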

Day 1: Deployment
  • Infrastructure as Code (IaC) ensures repeatable, secure, and idempotent installations. The companion how-to guide on GitHub provides a cloud-init YAML configuration file to deploy FreePBX on Ubuntu 24.04 LTS.
  • Configure connections from commonly used VoIP and FoIP providers. Telnyx, Flowroute, BulkVS, and T38Fax cater to customers of various sizes, expertise, and budgets. Their services are commonly discussed and reviewed on VoIP-specific forums and blogs. Flowroute and Telnyx provide service for international long-distance calls, and Telnyx can also provision phone numbers for incoming calls internationally. BulkVS and T38Fax do not provide outbound termination for international long-distance calls, and only terminate calls in North America. T38Fax is unique in that it does not provide VoIP service at all: by virtue of being optimized exclusively for faxing, it provides no guarantees that voice calls will connect successfully and only permits FoIP traffic on its network.

Day 2: Operations
  • Validate T.38 configurations: While most VoIP configurations can be validated by monitoring the Asterisk service in the Asterisk CLI, FoIP configurations require deeper analysis to verify. Fax machines communicate over the T.30 protocol, which ideally gets encapsulated into a T.38 data stream, or is otherwise transmitted as voice traffic using an audio codec. The Asterisk CLI will confirm whether faxes are being transmitted via an audio codec or via a T.38 data stream. The presence or absence of T.30’s Error Correction Mode (ECM), like most fax session parameters, is notoriously difficult to identify natively in Asterisk. Wireshark, a packet capture and analysis tool, can expose attributes of T.38 transmissions, including T.30 ECM error correction. For users who don’t want to go through the steps of analyzing a packet capture, T38Fax has published a free “Got ECM” online testing tool that can check your fax line and verify whether T.38 is configured to support ECM.
  • Ubuntu Pro entitlements for security patching should be enabled. Enable the “esm-apps” entitlement to get security updates for the open source software published in Ubuntu’s “universe” repository, and enable the “esm-infra” entitlement to get security updates for the Ubuntu LTS operating system beyond the 5-year standard support window. Ubuntu Pro also includes the Livepatch security patching automation tool to secure the Linux kernel on running systems, without causing downtime.

Day 0: Goals and requirements for your FreePBX system

Why should I install FreePBX on Ubuntu?

When installing FreePBX on Ubuntu 24.04 LTS, it is possible to perform the installation without compiling anything from source. Installing the majority of FreePBX’s software dependencies (like NodeJS, PHP, Asterisk, and Asterisk’s dependencies) from Canonical’s official Ubuntu repositories is a significant departure from every other FreePBX and Asterisk installation guide on the Internet today. This approach results in a FreePBX installation which benefits from Canonical’s security patching automations and systems management tools until April 2036.

Why deploy FreePBX to a single Ubuntu virtual machine?

For individuals and smaller organizations, installing and configuring FreePBX on a single Ubuntu virtual machine is simpler and far more cost-effective than configuring a high-availability cluster with Kamailio or OpenSIPS in front of multiple Asterisk servers. While such solutions exist for enterprises handling thousands of concurrent calls, they are beyond the scope of this article. Instead, this article will focus on:

  • Ease of deployment, maintenance, and recovery
  • Longevity of the deployment
  • Total cost of ownership

Why should I deploy my Ubuntu virtual machine on a public cloud?

The ability to provision exactly the right amount of compute for your FreePBX server, with a streamlined path for upgrading its size, makes any public cloud an excellent deployment target for FreePBX. A virtual machine in a public cloud bypasses the inefficiencies associated with over-allocating resources on a physical server. Installing Ubuntu on bare metal instead of in a cloud VM also introduces single points of failure throughout the stack, extending from the network to within the physical server. The network connectivity and the networking and server hardware in a public cloud datacenter are typically more robust than on-premise alternatives. Furthermore, virtual machines on all public clouds have snapshot capabilities, and often include preconfigured access to S3-compatible object storage. Google provides 5 GB per month of free object storage, and Oracle provides 20 GB of total free object storage, which can be used for FreePBX backups. Both the VM snapshot and object storage features simplify backup, recovery, and rollback strategies.

Ubuntu supports a diverse range of CPU architectures equally well, and FreePBX can be installed on any of them. Google Cloud provides Ubuntu images for both AMD64 and Arm64 architectures at various hardware configuration price points. Oracle includes generously sized Arm64 virtual machines in its always free tier, but they are difficult to reserve due to strong demand.

The Always Free tier of Google Cloud Compute Engine includes one e2-micro Ubuntu virtual machine in either the us-west1, us-central1, or us-east1 United States region, with the following configuration and limits:

  • One 30GB hard disk and 1 GB RAM
  • Two shared Intel Broadwell x86/64 cores
  • One IPv4 and IPv6 address on their premium tier network, with a configurable cloud firewall
  • Unlimited ingress (inbound to the VM) data transfer
  • 1 GB of egress data transfer (outbound from the VMs) to anywhere in the world, except China and Australia

The Always Free tier of Oracle Cloud Infrastructure’s Compute includes two VM.Standard.E2.1.Micro Ubuntu virtual machines in any global Oracle region, with the following configuration and limits:

  • One 50GB hard disk (up to a maximum of 200GB across all free instances) and 1 GB RAM
  • One dedicated core of an AMD EPYC 7551 32-core processor; the machine is not oversubscribed
  • Two IPv4 and IPv6 addresses, with a configurable cloud firewall
  • 0.48 Gbps maximum throughput speed
  • 10 TB of egress data transfer (outbound from the VMs) to anywhere in the world

How do I know if the Google or Oracle free tier VM is sufficient for my needs?

The load average in the FreePBX dashboard shows the average number of processes that are either running or waiting to run on the system in 1-minute, 5-minute, and 15-minute intervals.

When load averages exceed the number of vCPUs (a load of 2 on Google’s e2-micro instance), the instance enters a period of CPU bursting: it temporarily utilizes more of the physical core’s processing power to handle the increased demand. Google allows CPU bursting for up to 30 seconds before throttling the CPU. CPU bursting may be seen during server maintenance windows, but sustained bursting caused by FreePBX and Asterisk indicates your workload has outgrown the e2-micro machine. Virtual machines can be upgraded in place from e2-micro to either the e2-small or e2-medium size. Each step up doubles the bursting capacity: an e2-small can use 100% of both allocated vCPUs for 60 seconds at a time, and an e2-medium can sustain these spikes for 120 seconds, before CPU throttling is imposed. In other words, an e2-small can run at a sustained load of 2 for 1 minute without slowing down, and an e2-medium can do so for 2 minutes.
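
Outside the FreePBX dashboard, the same load averages can be read directly on the VM. A minimal sketch, assuming only standard coreutils and awk, compares the 5-minute load average against the vCPU count:

# Compare the 5-minute load average (second field of /proc/loadavg) to the vCPU count
awk -v ncpu="$(nproc)" '{status = ($2+0 > ncpu) ? " (bursting)" : ""; printf "5-min load %s on %s vCPU(s)%s\n", $2, ncpu, status}' /proc/loadavg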

When load averages consistently exceed 1 on Oracle’s VM.Standard.E2.1.Micro instance, it is time to upgrade. Oracle’s free tier VMs are regular VMs, not burstable VMs: their cores are dedicated and not oversubscribed, so bursting beyond the allocation is not possible.

Beyond monitoring CPU bursting from within the FreePBX dashboard, Google’s web portal at console.cloud.google.com provides recommendations when an e2-micro instance regularly has degraded CPU performance due to excessive bursting. Google will recommend an in-place upgrade of the virtual machine to a larger size, if it is deemed to be required. Performing an in-place upgrade entails shutting the machine down, changing the machine type from e2-micro to e2-medium or larger, and starting it up again. Oracle provides a comparable experience with Live Migration capabilities at cloud.oracle.com/compute/instances. Both public cloud providers offer the ability to reconfigure hardware when more resources are needed.

Network considerations for FreePBX SIP Trunks and Extensions

IPv4 and IPv6

Asterisk and the SIP and T.38 protocols have first-class support for IPv6. While T.38 can work over IPv6, the vast majority of T.38 implementations rely heavily on IPv4. The practical reality of existing infrastructure (and major service providers) operating largely on IPv4 means that the public cloud Ubuntu virtual machine cannot be connected via IPv6 exclusively: the VM must also have an IPv4 address. IPv4 comes with its own challenges: traversing a router’s network address translation (NAT) can be complicated, and misconfigurations can result in one-way audio, no audio, or dropped calls.

NAT

Most virtual machines on public clouds sit behind a 1-to-1 static NAT, so the VM’s external IP address is static and the internal port is always the same as the external port. The FreePBX external IP address is under Settings > Asterisk SIP Settings > NAT Settings > External Address, and it is auto-configured by default during the initial FreePBX installation. VoIP and FoIP SIP trunk providers also advertise their correct IP address and port, so the connection between a cloud-hosted FreePBX server and a SIP trunk provider is straightforward.

NAT-related complexity typically arises with SIP endpoints such as ATAs and softphones registered to FreePBX Extensions. Small and medium-sized business (SMB) and small office home office (SoHo) routers and firewalls such as OPNsense, pfSense, Cisco Meraki, Ubiquiti UniFi, and mainstream mesh routers will often randomize the port number used on the external side of the NAT, and also only allocate the NAT mapping ephemerally. For UDP traffic this allocation is usually in the range of 30-300 seconds. Routers must also employ one of several NAT behaviors, usually (but not exclusively) grouped into: Symmetric NAT, Restricted Cone NAT, Port Restricted Cone NAT, or Full Cone NAT.

There are a number of NAT traversal technologies, often hosted by cloud providers, that assist in reaching endpoints behind NAT: Session Traversal Utilities for NAT (STUN), Traversal Using Relays around NAT (TURN), Interactive Connectivity Establishment (ICE), and others. Of these, STUN is relatively common to encounter, as some endpoints have a STUN server configured by default, but it cannot assist in traversing highly restrictive NATs such as Symmetric NAT.

Routers may also support various features to help endpoint devices traverse their NAT, though these have limitations as well. Firewalls often have a SIP ALG (Application Layer Gateway) enabled by default that tries to modify SIP traffic so it traverses the NAT more reliably. In concept SIP ALGs sound useful, but they vary widely in their coverage of the SIP protocol and sometimes need to be bypassed or completely disabled. Protocols such as UPnP, PCP, and NAT-PMP allow endpoint devices to communicate with their router and discover their own NAT mappings. However, due to security concerns, these protocols are often disabled by users, administrators, or manufacturers that prioritize network security over convenience, and they are not widely used in the VoIP industry.

The variety and the compounding of these security restrictions create challenges for VoIP and FoIP devices and software. For example, highly secure OPNsense and pfSense deployments are more likely to have Symmetric NAT with UPnP-type services disabled by default. While secure, this also means that VoIP implementers may have to make extensive changes to a site’s firewall to get VoIP traffic flowing properly.

The Asterisk team has developed several NAT traversal strategies within Asterisk’s VoIP protocol implementations in response to these difficulties, and their solutions competently solve problems that third-party solutions failed to adequately address. The SIP protocol and media streams flow over separate IPs and ports, so NAT-specific configurations for each must be applied separately. FreePBX enables most of these NAT traversal enhancements by default.

NAT traversal mechanisms for SIP traffic:

  • Rewrite Contact
    Default: On (Recommended)
    The SIP contact is recorded when an endpoint registers to an extension. This is the address to which you send new requests (like SIP INVITEs to start a call). It records the IP/port the registration came from, not the one that the endpoint (which might not know it’s behind a NAT) advertises.
  • Force rport
    Default: On (Recommended)
    Very similar to Rewrite Contact, but helping with responses. Requests come with routing headers to indicate the path a response should take. The rport tag on these headers tells middleware that the IP/port on which the request was received should take precedence over the IP/port recorded in the SIP message. This setting tells FreePBX to follow this behavior even when the endpoint doesn’t supply the rport tag.
  • Qualify Frequency
    Default: 60 (Use: 25)
    Determines how often Asterisk will send out a SIP OPTIONS keepalive packet to keep a NAT session active. If the NAT mapping expires due to inactivity, the endpoint will be unreachable until it sends out new traffic, and even then it will probably pull a new port. The default setting for this is 60, but since some firewalls can time out their NAT mappings for UDP ports in only 30 seconds, 25 seconds may work better in some deployments.

NAT traversal mechanisms for the media streams:

  • Direct Media
    Default: Off (Recommended)
    Disabling this setting puts Asterisk in the middle of all media streams. This is necessary for your PBX to assist in NAT traversal, otherwise you will have to rely on the NAT traversal policies of your carrier(s). Note that Asterisk always proxies T.38 traffic, regardless of this setting.
  • RTP Symmetric
    Default: On (Recommended)
    Enabling symmetric RTP instructs Asterisk to send RTP traffic back to the IP/port you receive media from. Symmetric RTP only works if Direct Media is off.
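
To confirm that these FreePBX settings were actually pushed into Asterisk’s PJSIP configuration, the endpoint can be inspected from the shell; a sketch follows, where 9991 is a hypothetical extension number:

# Inspect the PJSIP endpoint behind a hypothetical extension 9991 and confirm the
# NAT-related options reflect the values configured in the FreePBX GUI
sudo asterisk -rx "pjsip show endpoint 9991" | grep -Ei "rewrite_contact|force_rport|rtp_symmetric|direct_media"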

Lastly, a site’s firewall may have SIP ALG enabled, which may interfere with your VoIP calls. You can avoid some SIP ALGs by moving away from the default UDP port 5060 on one or both sides of the connection, or by using TLS to encrypt the SIP traffic between your PBX and the endpoint. Other situations will require you to disable SIP ALG in the firewall.

Day 1: Deploy, install, and configure FreePBX, Asterisk, and Ubuntu

Infrastructure as Code

FreePBX 17 and Asterisk 20.6 can be deployed on Ubuntu 24.04 LTS following modern Infrastructure as Code (IaC) best practices, with a single version-controlled, declarative, and idempotent cloud-init YAML configuration file. The customizable cloud-init.yaml file in the companion how-to guide includes the bare minimum software required for a fully functional FreePBX and Asterisk installation. This lean initial installation can be expanded with additional FreePBX modules, which can be installed from within the Module Admin section of the FreePBX web portal. Alternatively, the cloud-init configuration can be updated to install additional FreePBX Modules during the provisioning step, by adding the desired module names to Line 250 of the cloud-init.yaml file.

Public cloud providers support virtual machine creation through their web portal, and through command line applications. cloud-init YAML configuration files can be used in the web portals and command line utilities of every major public cloud. Version-controlling cloud-init.yaml configuration files can help keep them organized. Updating Line 42 to point to your own FreePBX backup results in a nearly instant restore-from-backup solution to create clones of a FreePBX machine, which can be useful in the context of disaster recovery.

Installing FreePBX and Asterisk with cloud-init ensures the installation and configuration are event-driven, and the installation does not fail due to race conditions. cloud-init’s modules guarantee idempotence, and its declarative configurations provide an improved security posture over shell scripts that are downloaded from the Internet and run as root.
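
Before launching a VM, the configuration file can be checked against cloud-init’s schema, and after first boot the provisioning outcome can be confirmed on the instance itself; a brief sketch:

# Validate the configuration file against cloud-init's schema before deploying
cloud-init schema --config-file cloud-init.yaml

# On the freshly booted VM, wait for provisioning to finish and review the result
sudo cloud-init status --wait --long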

Configuring connections

This “configuring connections” section assumes:

  • successful completion of the installation of FreePBX on Ubuntu 24.04 LTS, as per the how-to guide published at https://github.com/rajannpatel/ubuntupbx, and access to the FreePBX web portal
  • or, deep familiarity with FreePBX configurations and settings.

Connectivity > Trunks

As a convenience, the cloud-init.yaml file contains a link to a FreePBX Core Module backup, which contains Trunk and Outbound Routes configurations for several VoIP and FoIP SIP trunk providers. Enabling these trunks will require setting the boolean value for each desired SIP trunk provider in cloud-init.yaml to true, between Lines 45 and 48.

If you are using Flowroute, edit the Flowroute trunk, and under the “Dialed Number Manipulation Rules” tab, set the value for the Outbound Dial Prefix to match the Tech Prefix with an asterisk appended at the end. The Tech Prefix is a nine-digit numerical string which can be found in the Account Profile section of the Flowroute web portal. The value of the Outbound Dial Prefix will be in XXXXXXXXX* format. For example, if your Tech Prefix is “123412345” then the Outbound Dial Prefix should be: 123412345*

If you are using Telnyx, edit the Telnyx trunk, and under the “Dialed Number Manipulation Rules” tab, the value for the Outbound Dial Prefix must be the Tech Prefix from the Telnyx web portal. Within the Telnyx web portal, navigate to Voice > SIP Trunking, and Create SIP Connection. The option to specify a Tech Prefix is only available when using IP Address authentication.

No changes need to be made for the BulkVS and T38Fax trunks in FreePBX; they work out of the box. Log into the customer portals at T38Fax and BulkVS and provide your IP address, so that incoming and outgoing calls can work. In the BulkVS customer portal, configure a BulkVS Trunk Group with incoming calls using “11 digits” delivery. Incoming calls formatted with E164 or 10 digits will not work with the default configurations in FreePBX.

Using 400 for the T38FaxMaxDatagram value

Under the T38Fax Trunk’s pjsip Settings > Advanced tab, T.38 has been enabled for FoIP. A conservative T38FaxMaxDatagram value of 400 has been set for this SIP trunk. While increasing this value provides little to no additional benefit, it is interesting to understand what the maximum value could be. The T38FaxMaxDatagram parameter defines the maximum size of the data packets sent between fax endpoints and FoIP SIP trunks. Over the years, this value has been interpreted differently by different vendors.

Virtual machines in Google Cloud have a default MTU of 1460 bytes, and while the MTU is configurable on Google Cloud to be any value between 1300 bytes and 8896 bytes, changing the MTU isn’t necessary. Assuming 20 bytes of IP overhead, 8 bytes of UDP overhead, and 40 bytes of T.38 overhead, an estimate of 68 bytes for overhead is reasonable. The payload space is 1460 (MTU) – 68 (Overhead) = 1392 bytes. Setting T38FaxMaxDatagram to 400 is generously below the upper limit of 1392 bytes, when counting backwards from the default MTU of 1460 bytes. The preconfigured T38FaxMaxDatagram value of 400 is an excellent value for guaranteed operation with the widest range of T.38 capable hardware, in almost any network.

N11 Trunks

There are several preconfigured Custom Trunks for 211, 311, 411, 511, 711, 811, 988, and 911/922/933 with custom dial strings. The custom dial strings map the three-digit phone numbers (also known as service codes or N11 codes) to their 10-digit North American phone numbers. Many services like 211, 311, 511, and 811 vary by locality. An example would be the 311 service, which must route to +1 704 638-5246 in Salisbury, NC, but to +1 704 336-7600 in Charlotte, NC. Create a trunk for each service code to map the service code with its appropriate regional service phone number.

Ensuring the extensions reach their correct regional service phone number when dialing the three-digit service code is addressed in the Outbound Routes configuration. N11 Outbound Routes, especially 911 emergency services, should use SIP trunks intended for voice traffic, and not a FoIP-only SIP trunk such as the one provided by T38Fax.

Outbound Routes Settings

Each Outbound Route in FreePBX can be configured to match a specific Trunk. The Outbound Routes are evaluated in top-down order. Outbound Routes can be selected based on Caller ID, or by prepending a unique dial string which matches a particular Outbound Route. 

“Caller ID” match patterns in Outbound Routes will be evaluated against the FreePBX Extension Number, and not necessarily the fax extension’s configured outbound caller ID value. “X” matches any digit from 0-9, “Z” matches any digit from 1-9, and “N” matches any digit from 2-9. Setting the “Caller ID” match pattern in the “t38fax” Outbound Route to “999Z” will match any FreePBX Extension numbered between 9991 and 9999. This allows fax extensions to make outbound calls using the T38Fax trunk exclusively, while voice traffic is routed through voice carriers.

Extension and Ring Groups Settings

Navigating to Connectivity > Extensions and clicking the Add Extension > Add New SIP [chan_pjsip] Extension button creates a SIP account in FreePBX, where the extension number is the username. It is possible to connect ATAs or softphones to Extensions.

Configurations for faxing under the Extension’s “Advanced” tab

For fax machine extensions, under Advanced, set the Direct Media setting to “No”. If Direct Media is enabled, the RTP portion of the call bypasses Asterisk, but the T.38 re-INVITE forces the Asterisk server back into the middle of the UDPTL media stream. This additional complication provides no benefit.

Fax software like iFAX’s HylaFAX and T38FaxVoIP’s Fax VoIP FSP Windows Fax Service Provider support elastic SIP trunks. An elastic SIP trunk can support multiple concurrent calls on a single SIP registration. In FreePBX, under the Extension’s “Advanced” tab, the Outbound Concurrency Limit is set to 3 by default. That number can be raised to however many simultaneous outbound faxes the extension needs to be able to send.

While Asterisk by default suppresses Call Waiting tones when a T.38 fax session is active, disabling the Call Waiting feature entirely from within the Advanced tab is a prudent choice.

Applications > Ring Groups

For fax softphones and fax server software that support multiple SIP registrations, create a unique extension for however many concurrent incoming fax streams the system needs to support. If the maximum number of concurrent incoming faxes is 8, create 8 extensions. Add all the extensions to a Ring Group, and set the “Ring Strategy” drop-down selection to “firstavailable”. The “Destination if no answer” setting should be Terminate Call > Busy. By default, fax machines will reattempt sending a fax at a later time if they receive a busy signal.

Connectivity > Inbound Routes

When adding Inbound Routes, the DID Number should always be 11 digits, assuming a North American phone number is being used. Specify the country code “1”, followed by the 10-digit phone number (for example: 17182222222).

Admin > Backup & Restore

Adding a regularly scheduled backup of all FreePBX Modules except the Framework Module is advisable. The backup without the Framework Module is useful for migrating to newer installations of FreePBX. Adding a regularly scheduled backup of all FreePBX Modules, including the Framework Module, is useful for disaster recovery, when the target system is running the same major FreePBX version.

The Storage Location is set to “_ASTSPOOLDIR_/backup”, which resolves to “/var/spool/asterisk/backup” on Ubuntu. The “backup” folder has been created in that spool directory at installation time, via cloud-init. The Delete After Runs setting is set to 1, and Delete After Days is set to Unlimited. Given these constraints, only 1 backup will be kept at any given moment.

FreePBX only natively connects with AWS S3 buckets. However, virtual machines running on Google Cloud have the gcloud utility preinstalled, and they can access S3-compatible Cloud Storage buckets on Google Cloud within the same project. It is possible to copy the contents of /var/spool/asterisk/backup on a daily basis to a storage bucket in Google Cloud, and use the bucket’s object lifecycle management rules to retire backups older than a certain age automatically.

Copying the latest backup(s) can be achieved by running `crontab -e` and adding this line to the crontab:

@daily gcloud storage rsync /var/spool/asterisk/backup gs://example-s3-bucket/backup --recursive

Due to a bug in the Backup & Restore FreePBX Module version 17.0.5.62, restoring the Backup & Restore Module configurations from a backup does not restore the crontab entries for the “asterisk” user. Any scheduled, recurring backups would have to be edited and re-saved within the FreePBX portal to generate the appropriate crontab entries for the “asterisk” user.
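
Whether the backup job survived a restore can be confirmed by listing the crontab of the “asterisk” user; once the job is re-saved in the FreePBX portal, the entry should reappear:

# List scheduled jobs for the asterisk user; a restored backup schedule should
# show up here once the job is re-saved in the FreePBX portal
sudo crontab -u asterisk -l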

To deploy a new Ubuntu virtual machine in Google Cloud and restore a backup from your Google Cloud Storage S3 bucket, edit the cloud-init.yaml file and change line 43:

{% set RESTORE_BACKUP = 'https://github.com/rajannpatel/ubuntupbx/raw/refs/heads/main/ubuntupbx.core.backup.tar.gz' %}

Instead of downloading the ubuntupbx.core.backup.tar.gz from GitHub, provide the address of your backup file in Google Cloud Storage. Remember to double-check the external IP address configuration on the target system when restoring from a backup; an incorrect external IP address configuration will cause no-audio issues on extensions and trunks. The FreePBX external IP address is set under Settings > Asterisk SIP Settings > NAT Settings > External Address.

Day 2: Maintain operational status and security

Validate optimal T.38 configurations

Got ECM?

T38Fax provides a free “Got ECM” fax-back testing tool that can check your incoming and outgoing faxes to verify whether your fax connection has T.30 ECM enabled. Submit the form with your email address and fax number. The “Got ECM” service will email you a copy of a one-page fax that is sent to your fax endpoint, confirm whether T.30 ECM was enabled, and provide other useful information including: Quality (as a scalar value), Page Width, Page Length, Signal Rate, Data Format, and the total number of attempts required to complete the test.

This information can also be validated with packet captures.

Fax over IP Protocol Inspection

Packet captures can be performed on the Ubuntu server running Asterisk. Performing packet captures here provides insight into what is happening on the SIP trunk to the carrier, and also into the communication between Asterisk and the fax machine. Alternatively, Wireshark can be used to inspect packet capture (PCAP) dumps taken on the network where the fax machine is running. Despite not having insight into the traffic between Asterisk and the SIP trunk, there is still enough data to validate that the T.38 protocol is in use, and that the fax transmission is via UDPTL rather than via G.711u-encoded RTP packets. The PCAP can also be used to verify whether error correction in fax transmissions is comprehensively enabled.
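
A capture can be taken on the Asterisk server with tcpdump; the sketch below records all UDP traffic (SIP signaling plus UDPTL or RTP media) during a test fax. The interface and filter are assumptions that may need narrowing on busy systems:

# Capture all UDP traffic during a test fax, then open fax-test.pcap in Wireshark
sudo tcpdump -i any -n -s 0 -w fax-test.pcap udp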

When T.38 is in use, reliable fax transmissions require inbound and outbound faxes to use network and application error correction features:

  • T.38 UDP Redundancy Error Correction (UDP EC)
  • T.38 Forward Error Correction (FEC)
  • T.30 Error Correction Mode (ECM)

FEC and UDP redundancy function at the network layer, while ECM works at the application layer. The network-layer mechanisms address the problem of packet loss during transmission by sending duplicates of the data. ECM detects errors after transmission of each page, and requests block retransmission only for corrupted segments of that page before moving on to the next page. This is how ECM ensures the received fax image is identical to the one that was sent, thereby ensuring 100% error-free delivery.

T.38 Error Correction

The T.38 re-INVITE step of a call ladder is the responsibility of the recipient of the fax. For outgoing calls, when fax tones are present carriers like T38Fax will re-INVITE the sender. For incoming calls, the T.38 endpoint receiving the call is responsible for sending the re-INVITE. This is the only way to avoid mid-air collisions where both sides re-INVITE simultaneously, a situation that can sometimes prevent T.38 negotiation, resulting in either a fallback to the G.711 audio codec, or a complete failure. The sender of the T.38 re-INVITE chooses between FEC and UDP redundancy for the entire T.38 session – and this error correction is used during handshake, training, and image transmission – but the de-facto standard today is UDP redundancy.

UDP redundancy is usually tuned with different values for high-speed (HS) and low-speed (LS) signaling. LS signaling is used for inter-page procedures such as handshake and training, which must make it across at all costs. HS signaling is used for image transmission, comprising the vast majority of the packets. When configurable, T38Fax recommends 5 for LS and 2 for HS when faxing over their ECM-enabled network. Higher redundancy for page data would be overkill, with ECM correcting any page data errors at the end of page (EOP) procedures.

Every carrier mentioned in this article defaults to UDP redundancy, and none default to using FEC. The efficiency FEC offers does not justify the complexity it introduces, especially if one leg of a transmission is using FEC and another leg is using UDP redundancy. Morgan Scarafiotti from the Support Engineering team at T38Fax weighs in, “I think it’s worth contextualizing FEC vs redundancy with some back-of-the-napkin estimation. A standard G.711u or G.711a call consumes a guaranteed 64kbps on the payload alone. Even if you use 3x redundancy, so 4 total copies of the data, you’re looking at about 14,400bps*4 = ~57,600bps in the payload at max speed. Compared to a standard G.711 codec you’re still saving a little bit of data even with quite a lot of redundancy. So, it’s true that FEC consumes even less bandwidth, maybe half of that, but redundancy doesn’t consume a whole lot of data to begin with.”

FEC sends extra data in subsequent packets that contain information derived from the previous data packets. The receiver can use these FEC packets to reconstruct lost or damaged data. FEC requires more encoding and decoding, but requires less network overhead than UDP redundancy. With UDP redundancy, retransmissions of the IFP (Internet Facsimile Protocol) packets are included in subsequent IP packets destined for the fax recipient. The retransmitted data is verified against what was previously received. In theory, this implementation makes the fax transmission more resilient against failure from packet loss. In practice, redundancy can quadruple the transmission overhead, while FEC suffers interop problems due to insufficient testing on the rare occasions it is used.

Network error correction features can be validated by filtering the PCAP by “sdp” to find the “Status: 200 OK (INVITE)” frame, drilling into Session Initiation Protocol (200) > Message Body > Session Description Protocol in the packet details pane, and finding “a=T38FaxUdpEC:t38UDPRedundancy”.
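
The same check can be scripted with tshark, Wireshark’s command line companion; a sketch, assuming the capture was saved as fax-test.pcap:

# Print SDP-bearing frames and look for the negotiated T.38 error correction mode
tshark -r fax-test.pcap -Y sdp -V | grep -i "T38FaxUdpEC"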

T.30 Error Correction

FoIP transmissions with ECM disabled may result in image artifacts, such as long black horizontal lines, or lines of text completely missing, leading to an arguably false-positive “successful transmission” confirmation when the fax call ends. Enabling ECM can extend the duration of fax transmissions by a small margin of a few seconds per page, because errors in the received image are addressed by retransmitting the original page data. VoIP SIP trunk providers may not offer T.38 support at all, but when T.38 is implemented, T.30 ECM is often missing.

When inspecting a packet capture with Wireshark, validating ECM requires filtering by “t30.fif.ecm”. This filter will reveal two frames, the “Digital Identification Signal (DIS)” and the “Digital Command Signal (DCS)” frames. In the DCS frame, drill into ITU-T Recommendation T.38 > UDPTLPacket > primary-ifp-packet > data-field > Item 0 > Data-Field > ITU-T Recommendation T.30 > Facsimile Control: Digital Command Signal. When “Error Correction Mode: Set” shows a 1 in the third bit, it is proof that T.30 ECM is enabled.
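
The equivalent check can also be sketched on the command line with tshark, again assuming a capture named fax-test.pcap:

# Show the DIS and DCS frames with their decoded T.30 fields, including the
# Error Correction Mode bit inside the Digital Command Signal
tshark -r fax-test.pcap -Y "t30.fif.ecm" -V | grep -i "error correction"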

Manual and automated patch management on Linux

Almost every other guide directs users to compile Asterisk from source. However, it is challenging to maintain Linux systems with packages compiled from source. When some packages are self-compiled, and others are installed from package managers, there is a heightened risk of dependency conflicts and version mismatches. A customized process is required to apply security updates in an application-specific manner; these nonstandard package installations represent snowflakes of management complexity in a Linux estate. Self-compiled software often relies on specific versions of libraries, and these versions might conflict with the versions required by packages installed through the system’s package manager. 

Resolving these conflicts can be a complex and time-consuming process. Packages installed through the system’s package manager are typically tracked and updated automatically.  When software is compiled from source, the user is responsible for tracking and applying security patches. This is a significant burden and increases the risk of running vulnerable software, especially when the path of least resistance is to avoid updating the self-compiled software.

Security patching all open source software installed on Ubuntu

Canonical has a 20-year track record of timely security updates for the main Ubuntu OS, with critical CVEs patched in less than 24 hours on average. Ubuntu Pro expands this coverage to include software installed from the universe repository. Patches are applied for critical, high, and selected medium CVEs, with many zero-day vulnerabilities fixed under embargo for release the moment the CVE is public.

Ubuntu Pro provides security coverage for almost 30,000 open source software titles, which include over 100,000 open source software packages. Asterisk and all its dependencies are included in the official Ubuntu repositories, and are covered with Ubuntu Pro. This open source software can be installed on Ubuntu with apt or apt-get. The major version of any software package installed from the official Ubuntu repositories is pinned for every Ubuntu LTS version, but the minor version number for each software package is incremented every time Canonical packages and publishes bugfixes and security updates for these major versions. These security patches may come from the upstream publisher of the open source software, or they may be contributed by Canonical to the upstream publisher.

Just like every Ubuntu LTS version, all packages installed via apt or apt-get from Canonical’s repositories or their mirrors also get 10 years of security patches when a free or paid Ubuntu Pro token is attached to the machine. If you do not launch a premium Ubuntu Pro image on a public cloud, you can attach your Ubuntu Pro token to your Ubuntu 24.04 LTS machine by specifying it in cloud-init.yaml. When installing FreePBX on Ubuntu 24.04 LTS, the security patching benefits are realized immediately for every FreePBX dependency installed from Canonical’s Ubuntu repositories, or their mirrors. Ubuntu Pro enables the entitlement for both the “esm-apps” and “esm-infra” security repositories; at the time of this writing, they already contain security patches for the following Asterisk dependencies:

  • libcjson1
  • libavdevice60
  • ffmpeg
  • libpostproc57
  • libavcodec60
  • libavutil58
  • libswscale7
  • libswresample4
  • libavformat60
  • libavfilter9

As Canonical publishes security patches for additional vulnerabilities as they are found, this list will continue to grow. On an Ubuntu 24.04 LTS instance without Ubuntu Pro, these security updates are not available:

ubuntu@pbx:~$ sudo apt update && sudo apt upgrade
Hit:1 http://us-east1.gce.archive.ubuntu.com/ubuntu noble InRelease
Hit:2 http://us-east1.gce.archive.ubuntu.com/ubuntu noble-updates InRelease                   
Hit:3 http://us-east1.gce.archive.ubuntu.com/ubuntu noble-backports InRelease                 
Hit:4 http://security.ubuntu.com/ubuntu noble-security InRelease                              
Hit:5 https://ppa.launchpadcontent.net/ondrej/php/ubuntu noble InRelease                      
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up to date.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
Get more security updates through Ubuntu Pro with 'esm-apps' enabled:
  libcjson1 libavdevice60 ffmpeg libpostproc57 libavcodec60 libavutil58
  libswscale7 libswresample4 libavformat60 libavfilter9
Learn more about Ubuntu Pro on GCP at https://ubuntu.com/gcp/pro
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
ubuntu@pbx:~$ 
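
If a machine was not attached to Ubuntu Pro during provisioning, a free personal token from ubuntu.com/pro can be attached afterwards; a sketch, where the token value is a placeholder:

# Attach an Ubuntu Pro token (placeholder shown), enable the expanded security
# repositories (usually enabled automatically on attach), and confirm their status
sudo pro attach <YOUR_PRO_TOKEN>
sudo pro enable esm-infra
sudo pro enable esm-apps
pro status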

Not all open source software maintainers will maintain their software for 10 years. For example, Sangoma security patches Asterisk LTS versions for four years. As an alternative to downloading the Asterisk tarball from Sangoma’s website and manually installing it on Ubuntu, performing apt install asterisk on Ubuntu 24.04 LTS results in an installation of Asterisk version 20. Ubuntu 24.04 LTS will only ever contain Asterisk version 20 in its “universe” repository. However, Asterisk version 20 in the Ubuntu 24.04 LTS universe repository gets in-place security patches for 10 years, despite the upstream version going end-of-life after four years. How does Canonical provide security patching for Asterisk 20 until 2036, beyond Sangoma’s 2027 end-of-life (EOL) date for Asterisk 20? Well, it’s simple: Canonical backports security patches from Sangoma’s currently maintained version of Asterisk. Similarly, Canonical authors its own security patches for all the open source software it packages and publishes in its repositories, beyond Asterisk. Even when upstream software maintainers have marked versions of software included in an Ubuntu LTS release as end-of-life, Canonical ensures that every version of Ubuntu LTS, and the software made available for installation on it, is safe and stable for a decade.

Anybody running Ubuntu 24.04 LTS with an Ubuntu Pro subscription (free or paid) will get security updates for packages installed from Canonical repositories until April 2036. Even when the stewards of Asterisk, NodeJS, and other open source software you rely on have shifted focus to newer versions of their software, you can rely on Canonical to provide security updates for what you have deployed on Ubuntu 24.04 LTS until April 2036, and longer for customers who purchase the +2 year Legacy Support add-on to Ubuntu Pro as the 10-year security coverage window comes close to elapsing.

Security patching automations for the Linux kernel

Livepatch shrinks the exploit window for critical and high severity Linux kernel vulnerabilities by patching the Linux kernel between security maintenance windows. While updates to the kernel installed via apt or apt-get require a reboot to take effect, Livepatch security patches do not require a reboot. Livepatch security patches exist in memory, and are forgotten upon reboot. At startup, Livepatch reevaluates the kernel version and reapplies the appropriate live kernel patches. High and critical vulnerabilities need to be addressed immediately, but software like Asterisk introduces constraints around when patch-related reboots can occur: nobody wants their faxes or voice calls dropped midway due to impromptu security patching. To prevent such a scenario, Livepatch serves as an interim solution for patching high and critical severity security vulnerabilities in the kernel in reboot-sensitive environments.

Livepatch requires an upgrade and reboot every 13 months when running general availability (GA) kernels, and every 9 months when running hardware enablement (HWE) kernels. Ubuntu Desktop and public cloud virtual machines run HWE kernels by default, whereas Ubuntu Server installs the GA kernel by default. Canonical does not perform testing on cumulative live kernel patches beyond 13- and 9-month windows respectively, for each kernel; consequently, there is no security patching available via Livepatch beyond that range.

Reconciling the desire to always have up-to-date security patches with the desire for the fewest, most predictable upgrade and reboot intervals results in a twice-yearly security patching cadence. Upgrade and reboot the machine after Canonical has published the latest kernels, in the spring and fall. Enable Livepatch to provide protection against critical and high vulnerabilities in between those windows.

Livepatch is enabled by default when deploying FreePBX with the cloud-init file from the companion how-to guide on GitHub.
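
Whether Livepatch is active on a running machine, and which kernel vulnerabilities it has already mitigated, can be confirmed from the shell; a brief sketch:

# Confirm the Livepatch client is enabled and list live patches applied to the running kernel
sudo canonical-livepatch status --verbose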

Recommended security patching cadences for FreePBX and Asterisk on Ubuntu

It is prudent to differentiate between bugfix patches and security patches. FreePBX supports this distinction for its own modules under Admin > Module Admin > Scheduler and Alerts.

When setting Automatic Module Updates to Email Only, the onus is on the system administrator to remember to apply the non-security software updates manually. With Automatic Module Security Updates enabled, they will be applied between midnight and 4AM daily. Through cloud-init.yaml, the unattended-upgrades and needrestart packages have been configured to install security updates daily at 4:10 AM, with a reboot (if necessary) at 4:30 AM.

# TIME TO INSTALL AND REBOOT UBUNTU FOR SECURITY PATCHES FROM CANONICAL IN XX:XX FORMAT
{% set SECURITY_INSTALL_TIME = "04:10" %}
{% set SECURITY_REBOOT_TIME = "04:30" %}

To accommodate the smallest risk appetite, apply security updates daily, with a nightly reboot.

To manually apply any FreePBX Module updates, visit Admin > Module Admin > Module Updates within the FreePBX web portal, click the “Check Online” button, add or remove the modules that need updating, and click “Process” to manually trigger the update process.
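
The same module updates can also be applied from the shell with the fwconsole utility that ships with FreePBX; a sketch of the typical sequence:

# List module updates available online, upgrade all installed modules, then reload FreePBX
sudo fwconsole ma listonline
sudo fwconsole ma upgradeall
sudo fwconsole reload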

Scheduled disk cleanup

It’s crucial to manage disk usage effectively on a long-running FreePBX deployment. Several factors can contribute to disk filling, including accumulating voicemails, call recordings (often used for training), verbose logging (common during configuration), and the growth of Call Detail Records (CDR) and Call Event Logging (CEL) databases.  These potential issues are mitigated by the following disk space management strategies implemented in the cloud-init.yaml file:

# NUMBER OF DAYS TO RETAIN CDR AND CEL RECORDS IN FREEPBX
{% set CDR_RETENTION_DAYS = "60" %}
{% set CEL_RETENTION_DAYS = "60" %}

These configurations ensure Call Detail Records and Call Event Logs older than 60 days are purged.

Additionally, cloud-init.yaml sets crontab entries to prune call recordings (if enabled) and voicemails older than 45 days. Logrotate is configured to rotate logs up to 3 times on a daily and weekly basis.
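
As an illustration of the kind of crontab entries cloud-init installs, the sketch below prunes recordings and voicemail older than 45 days; the paths and retention shown here are assumptions, and the authoritative values live in the guide’s cloud-init.yaml:

# Illustrative pruning jobs (assumed paths and 45-day retention), similar in spirit
# to the crontab entries configured by cloud-init.yaml
@daily find /var/spool/asterisk/monitor -type f -mtime +45 -delete
@daily find /var/spool/asterisk/voicemail -type f -name "msg*" -mtime +45 -delete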

Linux systems management with Landscape

Landscape is a Linux systems management tool designed to help you manage, monitor, secure, and inventory all aspects of your Ubuntu instances from a unified platform. Landscape is included with Ubuntu Pro, even at the free tier. Landscape provides system administration and security patching capabilities for Ubuntu via a web portal and API.

Landscape uses a client-server architecture, where a central Landscape Server manages an Ubuntu fleet. Launch an interactive wizard to connect the Landscape Client to a Landscape Server using the Pro client on Ubuntu 24.04 LTS:

$ pro enable landscape

Final thoughts

Deploying FreePBX and Asterisk on Ubuntu, particularly within a public cloud environment, offers a robust, cost-effective, and scalable solution for personal users and small to medium-sized businesses.

We’re thrilled to see fax-specific configuration tips and best practices in a FreePBX deployment guide for Ubuntu. In a fast-moving, increasingly voice-centric industry, Canonical has given its users powerful tools to build a long-lived platform that is optimized for reliable voice and fax transmissions, and we are pleased to recommend Ubuntu to our customers.

Darren Nickerson, President of T38Fax

By leveraging modern Infrastructure as Code practices, automated security patching, and reliable disaster recovery strategies, this setup ensures long-term stability, security, and performance. The integration of Ubuntu Pro further enhances security by providing extended support for critical open source software dependencies, closing exploit windows between vulnerability detection and scheduled security patching intervals with Livepatch, and simplifying Linux systems management and monitoring with Landscape.

These deployment choices not only address the historical challenges of Asterisk and FreePBX maintenance, but also align with contemporary best practices for reliability, cost-efficiency, and security. Whether you’re managing a small office or scaling up for larger operations, this guide provides a comprehensive framework for deploying and maintaining a resilient FreePBX and Asterisk based VoIP and FoIP system in 2025.

Talk to an expert

Ubuntu popularized the long term support (LTS) model for desktop and server Linux users. Talk to us about cloud-init, Ubuntu Pro, Livepatch, and Landscape for your long-running Linux workloads.

Contact Us


Canonical and MediaTek enhance reliability, accelerate market entry and reduce Total Cost of Ownership (TCO) for ODMs through Ubuntu Certified Hardware and Arm SystemReady programs 

The hardware ecosystem is evolving rapidly, presenting a continuous challenge in ensuring that new hardware is market-ready and meets software and security standards. Forward-looking hardware vendors recognise that certification and compliance with standards ensure product reliability, compatibility, and accelerated time to market.

Canonical and MediaTek underscore the essential role of certification and compliance in driving technological innovation and operational excellence. By achieving Ubuntu Certification and Arm’s SystemReady™ compliance, MediaTek has positioned Genio 1200 as a benchmark for industry standards and reliability. This achievement enhances its appeal for ODMs seeking product reliability and reduced TCO. The Advantech RSB-3810, powered by MediaTek Genio 1200 and certified on Ubuntu 22.04 LTS, exemplifies the practical benefits of both programs, showing how products can meet rigorous quality and reliability standards to satisfy market demands. 

Arm’s role in setting industry standards

The Arm SystemReady program helps ensure the interoperability of an operating system on Arm-based hardware. It unites the ecosystem on a common foundation and enables everyone to focus on differentiation and innovation. Developers can build software once and deploy it on any compliant hardware. SystemReady Devicetree targets Devicetree-based, Linux-only distributions on Arm-based systems. It optimizes install and boot for embedded systems, and includes support for secure firmware over-the-air (OTA) updates and Unified Extensible Firmware Interface (UEFI) secure boot.

“To address new edge AI use cases, there is a growing demand for high-performance, power-efficient IoT devices, making it more important than ever to have standards in place that improve the developer experience, ensure interoperability, reliability and drive innovation,” said Paul Williamson, SVP and GM, IoT Line of Business at Arm. “SystemReady helps the industry to standardize where it matters and provides a reliable framework for Canonical to offer Ubuntu for performance and long term support. MediaTek’s achievement will ensure developers can create better, more secure, AI experiences for end users.”

By setting standard interfaces between the firmware and operating system, SystemReady streamlines the process for achieving Ubuntu certification, which focuses on the broader scope of operating system functionality and overall performance. 

Ubuntu Certified Hardware: more than compliance — a commitment to excellence

Ubuntu certification ensures that devices not only meet performance standards but also excel in functionality, providing the optimal Ubuntu experience right out of the box. This cohesive process benefits ODMs by simplifying solution implementation and reducing development and support costs. Certified devices are thoroughly tested for enhanced reliability and performance. Organisations can combine these benefits with Ubuntu Pro for Devices to get security updates for up to 12  years. The Ubuntu Pro subscription also provides fleet management with Landscape and compliance with established security baselines and standards, ensuring devices meet the high expectations of end customers. 

“Our Certified Hardware Programme affirms our commitment to providing unmatched reliability and performance,” stated Olivier Philippe, VP Devices Engineering at Canonical. “Our certification of the MediaTek Genio 1200 and collaboration with Arm open new markets and present state-of-the-art performance/cost ratios to our partners, reinforcing the value of our precise and rigorous integration and testing capabilities.”

MediaTek’s achievement of these certifications establishes its SoC as a premier choice for ODMs seeking to meet and exceed industry standards, ensuring compliance and providing a competitive edge in the market.

“By certifying the MediaTek Genio 1200 for Ubuntu and SystemReady, we are highlighting our commitment to providing the most optimised computing experience for users,” said CK Wang, General Manager of the IoT Business Unit at MediaTek. “This milestone places us at the forefront of the industry and opens new market opportunities, ensuring that our products deliver superior performance and reliability.”

Together, these certifications form a robust foundation that enhances interoperability and ensures optimised performance, clearly defining their complementary roles in advancing the tech industry.

Canonical’s Partner Programs: driving value and innovation

Canonical’s partner programs for Independent Hardware Vendors (IHV) and Original Equipment Manufacturers (OEM) enhance the value offered to partners, streamlining product development cycles and market entry. Participation in these programs grants access to certified SoCs and devices, ensuring that partners are building their products using reliable components with software that is continuously updated, and maintained by Canonical. This demonstrates a shared commitment to align our partners’ solutions with the European Cyber Resilience Act (EU CRA) and other global security regulations and standards. 

Join the Canonical silicon program

Learn more about Ubuntu certified hardware

Contact us

on March 04, 2025 02:31 PM

March 03, 2025

Announcing Incus 6.10

Stéphane Graber

The Incus team is pleased to announce the release of Incus 6.10!

This release brings an easier way to run Incus with a valid HTTPS certificate, a new way to pass provisioning data through to VMs, a very welcome API enhancement and much more!

The highlights for this release are:

  • ACME DNS-01 validation (Let’s Encrypt)
  • API-wide filtering support (see the sketch after this list)
  • Support for SMBIOS11 provisioning in VMs
  • IOMMU support in VMs
  • VRF support for routed NICs
  • Creating profiles in a project through preseed
  • LZ4 support for backups and images
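
For those curious what the API-wide filtering looks like in practice, here's a rough sketch against the REST API; the socket path and the filter expression syntax are assumptions based on the conventions Incus inherited from LXD, so check the Incus API documentation for your version:

# Assumed default Unix socket path and LXD-style filter syntax; verify against the docs.
curl -G --unix-socket /var/lib/incus/unix.socket "http://incus/1.0/instances" \
  --data-urlencode "recursion=1" \
  --data-urlencode "filter=status eq running"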

NOTE: A bugfix release has been made available fixing a few regressions from the original 6.10 release. This is available as 6.10.1.

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

on March 03, 2025 10:54 PM

Welcome to the Ubuntu Weekly Newsletter, Issue 881 for the week of February 23 – March 1, 2025. The full version of this issue is available here.

In this issue we cover:

  • New Canonical CLA Process
  • New Ubuntu Technical Board 2025
  • Ubuntu Stats
  • Hot in Support
  • LXD: Weekly news #384
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • LoCo Events
  • Ubuntu Server Gazette – Issue 1 (Network online target, Network Time Security, Matrix chat)
  • Other Community News
  • Ubuntu Cloud News
  • Canonical News
  • In the Press
  • In the Blogosphere
  • In Other News
  • Other Articles of Interest
  • Featured Audio and Video
  • Updates and Security for Ubuntu 20.04, 22.04, 24.04, and 24.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • Din Mušić – LXD
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on March 03, 2025 10:20 PM

Linux users looking to fine-tune their audio experience often turn to EasyEffects, an advanced audio effects application designed for PipeWire. With EasyEffects, you can apply real-time audio processing to improve sound quality, including equalization, bass boosting, and noise reduction. Whether you’re using speakers or headphones, this tool offers extensive customization to suit your needs.

EasyEffects is the successor to PulseEffects, providing enhanced support for the modern PipeWire audio system, which is becoming the standard on many Linux distributions. PipeWire brings superior performance, lower latency, and better handling of audio and video streams compared to traditional PulseAudio setups.

If you’re looking for high-quality sound customization, installing and using EasyEffects Equalizer Presets is a great way to optimize your system’s audio. Below, we will guide you through installing and using a collection of curated presets for EasyEffects.


EasyEffects Presets Collection

A community-maintained repository provides a set of EasyEffects presets to enhance your listening experience. These presets help to optimize sound quality for different use cases, including bass enhancement, equalization, loudness control, and auto-gain adjustments.

Available Presets:

  • Bass Enhancing + Perfect EQ – Combines Ziyad Nazem’s “Perfect EQ” settings with the Razor surround impulse response for enhanced bass.
  • Perfect EQ – Enables Ziyad Nazem’s “Perfect EQ” without additional effects.
  • Boosted – Uses Ziyad Nazem’s “Boosted” equalizer settings, with an emphasis on lower frequencies.
  • Advanced Auto Gain – Designed for laptop speakers, improving both low and high frequencies while normalizing volume for speech and music.
  • LoudnessEqualizer – Optimized for laptop speakers, ensuring clear vocal audio and preventing sound dimming when bass is played.

Installation Guide

There are two ways to install the EasyEffects presets: automatic installation using a script, or manual installation by copying the configuration files.

Automatic Installation

The easiest way to install the presets is by running the provided installation script:

bash -c "$(curl -fsSL https://raw.githubusercontent.com/JackHack96/PulseEffects-Presets/master/install.sh)"

Note: The script requires curl to be installed. If your system does not have it, install it first:

sudo apt install curl  # For Debian/Ubuntu-based distros

After running the script, restart EasyEffects to apply the new presets.

Manual Installation

If you prefer manual installation, follow these steps:

  1. Clone the preset repository: git clone https://github.com/JackHack96/PulseEffects-Presets.git
  2. Copy the .json preset files to the EasyEffects configuration directory.
     For Flatpak installations:
     cp PulseEffects-Presets/*.json ~/.var/app/com.github.wwmm.easyeffects/config/easyeffects/output/
     For native installations (via PPA or AUR):
     cp PulseEffects-Presets/*.json ~/.config/easyeffects/output/
  3. Restart EasyEffects and select the desired preset from the interface.

Conclusion

EasyEffects is a powerful tool that allows Linux users to dramatically improve their audio experience. Whether you want deep bass, clear speech, or balanced sound, these presets provide a great starting point. By installing and experimenting with different configurations, you can find the perfect sound profile for your setup.

Try out these presets and enjoy superior audio on your Linux system!


For more details and updates, visit the official repository.

The post Enhance Your Audio Experience on Linux with EasyEffects Equalizer Presets appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

on March 03, 2025 07:03 AM

If you’re using Meshtastic—a popular open-source, long-range, and low-power communication network—keeping track of your network’s performance and health is crucial. Enter MeshSense: a simple, open-source application designed to monitor, map, and graphically display all the vital stats of your Meshtastic network. Whether you’re managing connected nodes, checking signal reports, or running trace routes, MeshSense offers a comprehensive set of tools to help you stay on top of your network.

What is MeshSense?

MeshSense is a powerful tool that directly connects to your Meshtastic node via Bluetooth or WiFi. Once connected, it continuously provides detailed information about the status and health of your network. With an intuitive interface, you can monitor your network’s performance and quickly identify any issues.

  • Node Monitoring: Track connected nodes, their health, and other essential metrics.
  • Signal Reports: Receive and analyze signal strength, noise levels, and more.
  • Trace Routes: View the routing paths and network topology for a clearer understanding of how your network is operating.

Whether you’re a Meshtastic enthusiast or using it for more serious applications, MeshSense makes it easy to monitor and maintain your network in real-time.

Getting Started with MeshSense

Getting started with MeshSense is straightforward, and it offers various ways to connect and use the application based on your needs.

1. Running MeshSense on Ubuntu

For most users, the easiest way to run MeshSense is with its graphical user interface (GUI). Simply download the latest version of the MeshSense AppImage from the official website and follow these steps:

  1. Download the MeshSense AppImage from here
  2. Install the required libfuse2 dependency:
    sudo apt install libfuse2
  3. Make the AppImage executable:
    chmod +x meshsense-x86_64.AppImage
  4. Run the application:
    ./meshsense-x86_64.AppImage --no-sandbox

2. Headless Usage for Advanced Users

For users who prefer working without a graphical interface, MeshSense offers a headless mode, which allows the application to run in the background or on a server.

To run MeshSense in headless mode, use the --headless flag:

ACCESS_KEY=mySecretKey ./meshsense-x86_64.AppImage --headless

You can specify an access key via the ACCESS_KEY environment variable, which is used for remote connections that require full permissions.
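
If you want a headless instance to start at boot and survive restarts, one option is a small systemd unit along these lines; the user, access key and AppImage path below are placeholders, so adjust them for your setup:

[Unit]
Description=MeshSense (headless)
After=network-online.target

[Service]
# Placeholders: adjust the user, key and AppImage path for your system.
User=meshsense
Environment=ACCESS_KEY=mySecretKey
ExecStart=/opt/meshsense/meshsense-x86_64.AppImage --headless
Restart=on-failure

[Install]
WantedBy=multi-user.target

Save it as /etc/systemd/system/meshsense.service, then run systemctl daemon-reload followed by systemctl enable --now meshsense.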

Developing with MeshSense

If you’re interested in contributing to MeshSense or running it from the source code, here’s a quick guide to setting up the development environment.

Clone the Repository

Start by cloning the official MeshSense repository:

git clone --recurse-submodules https://github.com/Affirmatech/MeshSense.git
cd MeshSense

Build the Dependencies

For Debian-based systems (like Ubuntu), you’ll need the following dependencies:

sudo apt install cmake libdbus-1-dev

Then, navigate to the api/webbluetooth directory and install the required npm packages:

cd api/webbluetooth
npm i
npm run build:all
cd ../..

To update the application with the latest code and dependencies, run the update.mjs script:

./update.mjs

Running the UI and API Services

  1. Start the UI Service: Navigate to the ui directory and run:
    cd ui
    PORT=5921 npm run dev
  2. Start the API Service: In a separate terminal, navigate to the api directory and run:
    cd api
    export DEV_UI_URL=http://localhost:5921
    PORT=5920 npm run dev

This will make the front-end of MeshSense accessible through your browser at http://localhost:5920. Be sure to avoid accidentally connecting to the UI service at http://localhost:5921, as this is meant only for development purposes.

Building the Application

To build the UI, API, and Electron components, you can use the build.mjs script. The official Electron builds will be signed with an Affirmatech certificate and placed in api/dist and electron/dist.
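
Presumably it is invoked the same way as update.mjs above, from the repository root; treat the exact invocation as an assumption and check the repository README for any arguments it expects:

./build.mjs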

Conclusion

MeshSense is a powerful and easy-to-use tool for anyone looking to manage and monitor their Meshtastic network. Whether you’re using it to keep an eye on connected nodes, track signal strength, or visualize network topology, MeshSense makes it all possible in a user-friendly interface. If you’re interested in diving deeper or contributing to its development, MeshSense also offers full support for headless usage and development setups.

For more detailed information or troubleshooting, be sure to check out the official GitHub repository and explore the extensive documentation and FAQs.

Stay connected, and keep your Meshtastic network in top shape with MeshSense!

The post MeshSense: A Comprehensive Tool for Monitoring and Mapping Your Meshtastic Network appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

on March 03, 2025 04:01 AM

March 02, 2025

Most of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

OpenSSH

OpenSSH upstream released 9.9p2 with fixes for CVE-2025-26465 and CVE-2025-26466. I got a heads-up on this in advance from the Debian security team, and prepared updates for all of testing/unstable, bookworm (Debian 12), bullseye (Debian 11), buster (Debian 10, LTS), and stretch (Debian 9, ELTS). jessie (Debian 8) is also still in ELTS for a few more months, but wasn’t affected by either vulnerability.

Although I’m not particularly active in the Perl team, I fixed a libnet-ssleay-perl build failure because it was blocking openssl from migrating to testing, which in turn was blocking the above openssh fixes.

I also sent a minor sshd -T fix upstream, simplified a number of autopkgtests using the newish Restrictions: needs-sudo facility, and prepared for removing the obsolete slogin symlink.
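
For anyone who hasn't come across it yet, needs-sudo is just another value for the Restrictions field in debian/tests/control; a minimal, illustrative stanza might look like this (the test name and dependency list are invented for the example):

Tests: ssh-login-as-user
Depends: openssh-server, sudo
Restrictions: needs-sudo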

PuTTY

I upgraded to the new upstream version 0.83.

GCC 15 build failures

I fixed build failures with GCC 15 in a few packages:

Python team

A lot of my Python team work is driven by its maintainer dashboard. Now that we’ve finished the transition to Python 3.13 as the default version, and inspired by a recent debian-devel thread started by Santiago, I thought it might be worth spending a bit of time on the “uscan error” section. uscan is typically scraping upstream web sites to figure out whether new versions are available, and so it’s easy for its configuration to become outdated or broken (there’s a minimal debian/watch sketch after the list below). Most of this work is pretty boring, but it can often reveal situations where we didn’t even realize that a Debian package was out of date. I fixed these packages:

  • cssutils (this in particular was very out of date due to a new and active upstream maintainer since 2021)
  • django-assets
  • django-celery-email
  • django-sass
  • django-yarnpkg
  • json-tricks
  • mercurial-extension-utils
  • pydbus
  • pydispatcher
  • pylint-celery
  • pyspread
  • pytest-pretty
  • python-apptools
  • python-django-libsass (contributed a packaging fix upstream in passing)
  • python-django-postgres-extra
  • python-django-waffle
  • python-ephemeral-port-reserve
  • python-ifaddr
  • python-log-symbols
  • python-msrest
  • python-msrestazure
  • python-netdisco
  • python-pathtools
  • python-user-agents
  • sinntp
  • wchartype
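
As a reminder of what uscan actually reads, here's a minimal debian/watch sketch for a project publishing tarballs on GitHub; the project URL and pattern are placeholders, and real watch files frequently need extra mangling options:

version=4
https://github.com/example/project/tags .*/archive/refs/tags/v?(\d[\d.]+)\.tar\.gz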

I upgraded these packages to new upstream versions:

  • cssutils (contributed a packaging tweak upstream)
  • django-iconify
  • django-sass
  • domdf-python-tools
  • extra-data (fixing a numpy 2.0 failure)
  • flufl.i18n
  • json-tricks
  • jsonpickle
  • mercurial-extension-utils
  • mod-wsgi
  • nbconvert
  • orderly-set
  • pydispatcher (contributed a Python 3.12 fix upstream)
  • pylint
  • pytest-rerunfailures
  • python-asyncssh
  • python-box (contributed a packaging fix upstream)
  • python-charset-normalizer
  • python-django-constance
  • python-django-guid
  • python-django-pgtrigger
  • python-django-waffle
  • python-djangorestframework-simplejwt
  • python-formencode
  • python-holidays (contributed a test fix upstream)
  • python-legacy-cgi
  • python-marshmallow-polyfield (fixing a test failure)
  • python-model-bakery
  • python-mrcz (fixing a numpy 2.0 failure)
  • python-netdisco
  • python-npe2
  • python-persistent
  • python-pkginfo (fixing a test failure)
  • python-proto-plus
  • python-requests-ntlm
  • python-roman
  • python-semantic-release
  • python-setproctitle
  • python-stdlib-list
  • python-trustme
  • python-typeguard (fixing a test failure)
  • python-tzlocal
  • pyzmq
  • setuptools-scm
  • sqlfluff
  • stravalib
  • tomopy
  • trove-classifiers
  • xhtml2pdf (fixing CVE-2024-25885)
  • xonsh
  • zodbpickle
  • zope.deprecation
  • zope.testrunner

In bookworm-backports, I updated python-django to 3:4.2.18-1 (issuing BSA-121) and added new backports of python-django-dynamic-fixture and python-django-pgtrigger, all of which are dependencies of debusine.

I went through all the build failures related to python-click 8.2.0 (which was confusingly tagged but not fully released upstream) and posted an analysis.

I fixed or helped to fix various other build/test failures:

I dropped support for the old setup.py ftest command from zope.testrunner upstream.

I fixed various odds and ends of bugs:

Installer team

Following up on last month, I merged and uploaded Helmut’s /usr-move fix.

on March 02, 2025 01:49 PM

February 27, 2025

E338 Sol & Mar E Nheko Nheko

Podcast Ubuntu Portugal

After a week spent exploring the innards of Matrix, strolling through the forest with uNav, watching internet videos with FreeTube and reading the latest UBports news, our peace and quiet ended when controversy erupted: an announcement about the future of Ubuntu for the next 20 years! To liven things up, Diogo broke OBS for the video broadcasts. How? Not even he knows, but it was fun watching him sweat to (not) solve the problem. Towards the end, we discussed the best way to take a chainsaw, roughly, to lines of code, and speculated first-hand about the host country of the next Ubuntu Summit.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get the whole thing for 15 dollars, or different parts depending on whether you pay 1 or 8. We think it's worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying as much as you like. If you're interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you'll also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on February 27, 2025 12:00 AM

February 24, 2025

Welcome to the Ubuntu Weekly Newsletter, Issue 880 for the week of February 16 – 22, 2025. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu 24.04.2 LTS released
  • Plucky Puffin 25.04 Wallpaper Competition
  • Plucky (to be Plucky Puffin) now in Feature Freeze
  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • LXD: Weekly news #383
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • Git & GitHub Session : Ubuntu Nepal’s Session In SandBox Hackathon Event
  • Ubuntu Africa & XION: Driving Smart Contract Development with Open-Source Solutions
  • LoCo Events
  • Evaluating the new APT solver in 25.04
  • Ubuntu 25.04 mid-cycle roadmap
  • Ubuntu Cloud News
  • Canonical News
  • In the Press
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Updates and Security for Ubuntu 20.04, 22.04, 24.04, and 24.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • Din Mušić – LXD
  • Simon Quigley
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on February 24, 2025 10:51 PM

February 23, 2025

Qalculate time hacks

Colin Watson

Anarcat recently wrote about Qalculate, and I think I’m a convert, even though I’ve only barely scratched the surface.

The thing I almost immediately started using it for is time calculations. When I started tracking my time, I quickly found that Timewarrior was good at keeping all the data I needed, but I often found myself extracting bits of it and reprocessing it in variously clumsy ways. For example, I often don’t finish a task in one sitting; maybe I take breaks, or I switch back and forth between a couple of different tasks. The raw output of timew summary is a bit clumsy for this, as it shows each chunk of time spent as a separate row:

$ timew summary 2025-02-18 Debian

Wk Date       Day Tags                            Start      End    Time   Total
W8 2025-02-18 Tue CVE-2025-26465, Debian,       9:41:44 10:24:17 0:42:33
                  next, openssh
                  Debian, FTBFS with GCC-15,   10:24:17 10:27:12 0:02:55
                  icoutils
                  Debian, FTBFS with GCC-15,   11:50:05 11:57:25 0:07:20
                  kali
                  Debian, Upgrade to 0.67,     11:58:21 12:12:41 0:14:20
                  python_holidays
                  Debian, FTBFS with GCC-15,   12:14:15 12:33:19 0:19:04
                  vigor
                  Debian, FTBFS with GCC-15,   12:39:02 12:39:38 0:00:36
                  python_setproctitle
                  Debian, Upgrade to 1.3.4,    12:39:39 12:46:05 0:06:26
                  python_setproctitle
                  Debian, FTBFS with GCC-15,   12:48:28 12:49:42 0:01:14
                  python_setproctitle
                  Debian, Upgrade to 3.4.1,    12:52:07 13:02:27 0:10:20 1:44:48
                  python_charset_normalizer

                                                                         1:44:48

So I wrote this Python program to help me:

#! /usr/bin/python3

"""
Summarize timewarrior data, grouped and sorted by time spent.
"""

import json
import subprocess
from argparse import ArgumentParser, RawDescriptionHelpFormatter
from collections import defaultdict
from datetime import datetime, timedelta, timezone
from operator import itemgetter

from rich import box, print
from rich.table import Table


parser = ArgumentParser(
    description=__doc__, formatter_class=RawDescriptionHelpFormatter
)
parser.add_argument("-t", "--only-total", default=False, action="store_true")
parser.add_argument(
    "range",
    nargs="?",
    default=":today",
    help="Time range (usually a hint, e.g. :lastweek)",
)
parser.add_argument("tag", nargs="*", help="Tags to filter by")
args = parser.parse_args()

entries: defaultdict[str, timedelta] = defaultdict(timedelta)
now = datetime.now(timezone.utc)
for entry in json.loads(
    subprocess.run(
        ["timew", "export", args.range, *args.tag],
        check=True,
        capture_output=True,
        text=True,
    ).stdout
):
    start = datetime.fromisoformat(entry["start"])
    if "end" in entry:
        end = datetime.fromisoformat(entry["end"])
    else:
        end = now
    entries[", ".join(entry["tags"])] += end - start

if not args.only_total:
    table = Table(box=box.SIMPLE, highlight=True)
    table.add_column("Tags")
    table.add_column("Time", justify="right")
    for tags, time in sorted(entries.items(), key=itemgetter(1), reverse=True):
        table.add_row(tags, str(time))
    print(table)

total = sum(entries.values(), start=timedelta())
hours, rest = divmod(total, timedelta(hours=1))
minutes, rest = divmod(rest, timedelta(minutes=1))
seconds = rest.seconds
print(f"Total time: {hours:02}:{minutes:02}:{seconds:02}")
$ summarize-time 2025-02-18 Debian

  Tags                                                     Time
 ───────────────────────────────────────────────────────────────
  CVE-2025-26465, Debian, next, openssh                 0:42:33
  Debian, FTBFS with GCC-15, vigor                      0:19:04
  Debian, Upgrade to 0.67, python_holidays              0:14:20
  Debian, Upgrade to 3.4.1, python_charset_normalizer   0:10:20
  Debian, FTBFS with GCC-15, kali                       0:07:20
  Debian, Upgrade to 1.3.4, python_setproctitle         0:06:26
  Debian, FTBFS with GCC-15, icoutils                   0:02:55
  Debian, FTBFS with GCC-15, python_setproctitle        0:01:50

Total time: 01:44:48

Much nicer. But that only helps with some of my reporting. At the end of a month, I have to work out how much time to bill Freexian for and fill out a timesheet, and for various reasons those queries don’t correspond to single timew tags: they sometimes correspond to the sum of all time spent on multiple tags, or to the time spent on one tag minus the time spent on another tag, or similar. As a result I quite often have to do basic arithmetic on time intervals; but that’s surprisingly annoying! I didn’t previously have good tools for that, and was reduced to doing things like str(timedelta(hours=..., minutes=..., seconds=...) + ...) in Python, which gets old fast.
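
Spelled out, that Python incantation for a single subtraction looks roughly like this (the operands here match the qalc example below):

from datetime import timedelta

# 62:46:30 minus 51:02:42, the long-winded way
total = timedelta(hours=62, minutes=46, seconds=30) - timedelta(hours=51, minutes=2, seconds=42)
print(total)  # 11:43:48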

Instead:

$ qalc '62:46:30 - 51:02:42 to time'
(225990 / 3600) − (183762 / 3600) = 11:43:48

I also often want to work out how much of my time I’ve spent on Debian work this month so far, since Freexian pays me for up to 20% of my work time on Debian; if I’m under that then I might want to prioritize more Debian projects, and if I’m over then I should be prioritizing more Freexian projects as otherwise I’m not going to get paid for that time.

$ summarize-time -t :month Freexian
Total time: 69:19:42
$ summarize-time -t :month Debian
Total time: 24:05:30
$ qalc '24:05:30 / (24:05:30 + 69:19:42) to %'
(86730 / 3600) / ((86730 / 3600) + (249582 / 3600)) ≈ 25.78855349%

I love it.

on February 23, 2025 08:00 PM

February 21, 2025

The Open Source Initiative has two classes of board seats: Affiliate seats, and Individual Member seats. 

In the upcoming election, each affiliate can nominate a candidate, and each affiliate can cast a vote for the Affiliate candidates, but there's only 1 Affiliate seat available. I initially expressed interest in being nominated as an Affiliate candidate via Debian. But since Bradley Kuhn is also running for an Affiliate seat with a similar platform to me, especially with regards to the OSAID, I decided to run as part of an aligned "ticket" as an Individual Member to avoid contention for the 1 Affiliate seat.

Bradley and I discussed running on a similar ticket around 8/9pm Pacific, and I submitted my candidacy around 9pm PT on 17 February. 

I was dismayed when I received the following mail from Nick Vidal:

Dear Luke,

Thank you for your interest in the OSI Board of Directors election. Unfortunately, we are unable to accept your application as it was submitted after the official deadline of Monday Feb 17 at 11:59 pm UTC. To ensure a fair process, we must adhere to the deadline for all candidates.

We appreciate your enthusiasm and encourage you to stay engaged with OSI’s mission. We hope you’ll consider applying in the future or contributing in other meaningful ways.

Best regards,
OSI Election Teams

Nowhere on the "OSI’s board of directors in 2025: details about the elections" page do they list a timezone for closure of nominations; they simply list Monday 17 February. 

The OSI's contact address is in California, so it seems arbitrary and capricious to retroactively define all of these processes as being governed by UTC.

I was not able to participate in the "potential board director" info sessions accordingly, but people who attended heard that the importance of accommodating differing TZ's was discussed during the info session, and that OSI representatives mentioned they try to accommodate TZ's of everyone. This seems in sharp contrast with the above policy. 

I urge the OSI to reconsider this policy and allow me to stand for an Individual seat in the current cycle. 

on February 21, 2025 10:35 AM

February 20, 2025

The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 24.04.2 LTS. This is a minor release which wraps-up the security and bug fixes into one .iso image, available for download now.

Among the changes, we have updated the support and help links in the menu, fixed bugs in Ubuntu Studio Installer, and more. As always, check the Ubuntu Studio 24.04 LTS release notes for more information.

Please give financially to Ubuntu Studio!

Giving is down. We understand that some people may no longer be able to give financially to this project, and that’s OK. However, if you have never given to Ubuntu Studio for the hard work and dedication we put into this project, please consider a monetary contribution.

Additionally, we would love to see more monthly contributions to this project. You can do so via PayPal, Liberapay, or Patreon. We would love to see more contributions!

So don’t wait, and don’t wait for someone else to do it! Thank you in advance!

  • Donate using PayPal (donations are monthly or one-time)
  • Donate using Liberapay (donations are weekly, monthly, or annually)
  • Donate using Patreon: Become a Patron! (donations are monthly)

on February 20, 2025 06:45 PM

boot2kier

Paul Tagliamonte

I can’t remember exactly the joke I was making at the time in my work’s slack instance (I’m sure it wasn’t particularly funny, though; and not even worth re-reading the thread to work out), but it wound up with me writing a UEFI binary for the punchline. Not to spoil the ending but it worked - no pesky kernel, no messing around with “userland”. I guess the only part of this you really need to know for the setup here is that it was a Severance joke, which is some fantastic TV. If you haven’t seen it, this post will seem perhaps weirder than it actually is. I promise I haven’t joined any new cults. For those who have seen it, the payoff to my joke is that I wanted my machine to boot directly to an image of Kier Eagan.

As for how to do it – I figured I’d give the uefi crate a shot, and see how it is to use, since this is a low stakes way of trying it out. In general, this isn’t the sort of thing I’d usually post about – except this wound up being easier and way cleaner than I thought it would be. That alone is worth sharing, in the hopes someone comes across this in the future and feels like they, too, can write something fun targeting the UEFI.

First thing’s first – gotta create a rust project (I’ll leave that part to you depending on your life choices), and to add the uefi crate to your Cargo.toml. You can either use cargo add or add a line like this by hand:

uefi = { version = "0.33", features = ["panic_handler", "alloc", "global_allocator"] }

We also need to teach cargo about how to go about building for the UEFI target, so we need to create a rust-toolchain.toml with one (or both) of the UEFI targets we’re interested in:

[toolchain]
targets = ["aarch64-unknown-uefi", "x86_64-unknown-uefi"]

Unfortunately, I wasn’t able to use the image crate, since it won’t build against the uefi target. This looks like it’s because rustc had no way to compile the required floating point operations within the image crate without hardware floating point instructions specifically. Rust tends to punt a lot of that to libm usually, so this isn’t entirely shocking given we’re no_std for a non-hardfloat target.

So-called “softening” requires a software floating point implementation that the compiler can use to “polyfill” (feels weird to use the term polyfill here, but I guess it’s spiritually right?) the lack of hardware floating point operations, which rust hasn’t implemented for this target yet. As a result, I changed tactics, and figured I’d use ImageMagick to pre-compute the pixels from a jpg, rather than doing it at runtime. A bit of a bummer, since I need to do more out of band pre-processing and hardcoding, and updating the image kinda sucks as a result – but it’s entirely manageable.

$ convert -resize 1280x900 kier.jpg kier.full.jpg
$ convert -depth 8 kier.full.jpg rgba:kier.bin

This will take our input file (kier.jpg), resize it to get as close to the desired resolution as possible while maintaining aspect ratio, then convert it from a jpg to a flat array of 4 byte RGBA pixels. Critically, it’s also important to remember that the size of the kier.full.jpg file may not actually be the requested size – it will not change the aspect ratio, so be sure to make a careful note of the resulting size of the kier.full.jpg file.

Last step with the image is to compile it into our Rust binary, since we don’t want to struggle with trying to read this off disk, which is thankfully real easy to do.

const KIER: &[u8] = include_bytes!("../kier.bin");
const KIER_WIDTH: usize = 1280;
const KIER_HEIGHT: usize = 641;
const KIER_PIXEL_SIZE: usize = 4;

Remember to use the width and height from the final kier.full.jpg file as the values for KIER_WIDTH and KIER_HEIGHT. KIER_PIXEL_SIZE is 4, since we have 4 byte wide values for each pixel as a result of our conversion step into RGBA. We’ll only use RGB, and if we ever drop the alpha channel, we can drop that down to 3. I don’t entirely know why I kept alpha around, but I figured it was fine. My kier.full.jpg image winds up shorter than the requested height (which is also qemu’s default resolution for me) – which means we’ll get a semi-annoying black band under the image when we go to run it – but it’ll work.

Anyway, now that we have our image as bytes, we can get down to work, and write the rest of the code to handle moving bytes around in memory as a flat block of pixels, and request that they be displayed using the UEFI GOP. We’ll just need to hack up a container for the image pixels and teach it how to blit to the display.

/// RGB Image to move around. This isn't the same as an
/// `image::RgbImage`, but we can associate the size of
/// the image along with the flat buffer of pixels.
struct RgbImage {
    /// Size of the image as a tuple, as the
    /// (width, height)
    size: (usize, usize),
    /// raw pixels we'll send to the display.
    inner: Vec<BltPixel>,
}

impl RgbImage {
    /// Create a new `RgbImage`.
    fn new(width: usize, height: usize) -> Self {
        RgbImage {
            size: (width, height),
            inner: vec![BltPixel::new(0, 0, 0); width * height],
        }
    }

    /// Take our pixels and request that the UEFI GOP
    /// display them for us.
    fn write(&self, gop: &mut GraphicsOutput) -> Result {
        gop.blt(BltOp::BufferToVideo {
            buffer: &self.inner,
            src: BltRegion::Full,
            dest: (0, 0),
            dims: self.size,
        })
    }
}

impl Index<(usize, usize)> for RgbImage {
    type Output = BltPixel;

    fn index(&self, idx: (usize, usize)) -> &BltPixel {
        let (x, y) = idx;
        &self.inner[y * self.size.0 + x]
    }
}

impl IndexMut<(usize, usize)> for RgbImage {
    fn index_mut(&mut self, idx: (usize, usize)) -> &mut BltPixel {
        let (x, y) = idx;
        &mut self.inner[y * self.size.0 + x]
    }
}

We also need to do some basic setup to get a handle to the UEFI GOP via the UEFI crate (using uefi::boot::get_handle_for_protocol and uefi::boot::open_protocol_exclusive for the GraphicsOutput protocol), so that we have the object we need to pass to RgbImage in order for it to write the pixels to the display. The only trick here is that the display on the booted system can really be any resolution – so we need to do some capping to ensure that we don’t write more pixels than the display can handle. Writing fewer than the display’s maximum seems fine, though.

fn praise() -> Result {
    let gop_handle = boot::get_handle_for_protocol::<GraphicsOutput>()?;
    let mut gop = boot::open_protocol_exclusive::<GraphicsOutput>(gop_handle)?;
    // Get the (width, height) that is the minimum of
    // our image and the display we're using.
    let (width, height) = gop.current_mode_info().resolution();
    let (width, height) = (width.min(KIER_WIDTH), height.min(KIER_HEIGHT));
    let mut buffer = RgbImage::new(width, height);
    for y in 0..height {
        for x in 0..width {
            let idx_r = ((y * KIER_WIDTH) + x) * KIER_PIXEL_SIZE;
            let pixel = &mut buffer[(x, y)];
            pixel.red = KIER[idx_r];
            pixel.green = KIER[idx_r + 1];
            pixel.blue = KIER[idx_r + 2];
        }
    }
    buffer.write(&mut gop)?;
    Ok(())
}

Not so bad! A bit tedious – we could solve some of this by turning KIER into an RgbImage at compile-time using some clever Cow and const tricks and implement blitting a sub-image of the image – but this will do for now. This is a joke, after all, let’s not go nuts. All that’s left with our code is for us to write our main function and try and boot the thing!

#[entry]
fn main() -> Status {
    uefi::helpers::init().unwrap();
    praise().unwrap();
    boot::stall(100_000_000);
    Status::SUCCESS
}

If you’re following along at home and so interested, the final source is over at gist.github.com. We can go ahead and build it using cargo (as is our tradition) by targeting the UEFI platform.

$ cargo build --release --target x86_64-unknown-uefi

Testing the UEFI Blob

While I can definitely get my machine to boot these blobs to test, I figured I’d save myself some time by using QEMU to test without a full boot. If you’ve not done this sort of thing before, we’ll need two packages, qemu and ovmf. It’s a bit different than most invocations of qemu you may see out there – so I figured it’d be worth writing this down, too.

$ doas apt install qemu-system-x86 ovmf

qemu has a nice feature where it’ll create us an EFI partition as a drive and attach it to the VM off a local directory – so let’s construct an EFI partition file structure, and drop our binary into the conventional location. If you haven’t done this before, and are only interested in running this in a VM, don’t worry too much about it, a lot of it is convention and this layout should work for you.

$ mkdir -p esp/efi/boot
$ cp target/x86_64-unknown-uefi/release/*.efi \
 esp/efi/boot/bootx64.efi

With all this in place, we can kick off qemu, booting it in UEFI mode using the ovmf firmware, attaching our EFI partition directory as a drive to our VM to boot off of.

$ qemu-system-x86_64 \
 -enable-kvm \
 -m 2048 \
 -smbios type=0,uefi=on \
 -bios /usr/share/ovmf/OVMF.fd \
 -drive format=raw,file=fat:rw:esp

If all goes well, soon you’ll be met with the all knowing gaze of Chosen One, Kier Eagan. The thing that really impressed me about all this is this program worked first try – it all went so boringly normal. Truly, kudos to the uefi crate maintainers, it’s incredibly well done.

Booting a live system

Sure, we could stop here, but anyone can open up an app window and see a picture of Kier Eagan, so I knew I needed to finish the job and boot a real machine up with this. In order to do that, we need to format a USB stick. BE SURE /dev/sda IS CORRECT IF YOU’RE COPY AND PASTING. All my drives are NVMe, so BE CAREFUL – if you use SATA, it may very well be your hard drive! Please do not destroy your computer over this.

$ doas fdisk /dev/sda
Welcome to fdisk (util-linux 2.40.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4014079, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4014079, default 4014079):
Created a new partition 1 of type 'Linux' and of size 1.9 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): ef
Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Once that looks good (depending on your flavor of udev you may or may not need to unplug and replug your USB stick), we can go ahead and format our new EFI partition (BE CAREFUL THAT /dev/sda IS YOUR USB STICK) and write our EFI directory to it.

$ doas mkfs.fat /dev/sda1
$ doas mount /dev/sda1 /mnt
$ cp -r esp/efi /mnt
$ find /mnt
/mnt
/mnt/efi
/mnt/efi/boot
/mnt/efi/boot/bootx64.efi

Of course, naturally, devotion to Kier shouldn’t mean backdooring your system. Disabling Secure Boot runs counter to the Core Principals, such as Probity, and not doing this would surely run counter to Verve, Wit and Vision. This bit does require that you’ve taken the step to enroll a MOK and know how to use it, right about now is when we can use sbsign to sign our UEFI binary we want to boot from to continue enforcing Secure Boot. The details for how this command should be run specifically is likely something you’ll need to work out depending on how you’ve decided to manage your MOK.

$ doas sbsign \
 --cert /path/to/mok.crt \
 --key /path/to/mok.key \
 target/x86_64-unknown-uefi/release/*.efi \
 --output esp/efi/boot/bootx64.efi

I figured I’d leave a signed copy of boot2kier at /boot/efi/EFI/BOOT/KIER.efi on my Dell XPS 13, with Secure Boot enabled and enforcing; it just took going into my BIOS to add the right boot option, which was no sweat. I’m sure there is a way to do it using efibootmgr, but I wasn’t smart enough to do that quickly. I let ‘er rip, and it booted up and worked great!
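
(For the record, the efibootmgr version is probably something along these lines; the disk, partition number and label here are assumptions for a typical NVMe setup, so double-check them before running it.)

$ doas efibootmgr --create \
    --disk /dev/nvme0n1 --part 1 \
    --label "Kier" \
    --loader '\EFI\BOOT\KIER.efi'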

It was a bit hard to get a video of my laptop, though – but lucky for me, I have a Minisforum Z83-F sitting around (which, until a few weeks ago, was running the annual HTTP server to control my Christmas tree), so I grabbed it out of the Christmas bin, wired it up to a video capture card I have sitting around, and figured I’d grab a video of me booting a physical device off the boot2kier USB stick.

Attentive readers will notice the image of Kier is smaller than the qemu booted system – which just means our real machine has a larger GOP display resolution than qemu, which makes sense! We could write some fancy resize code (sounds annoying), center the image (can’t be assed but should be the easy way out here) or resize the original image (pretty hardware specific workaround). Additionally, you can make out the image being written to the display before us (the Minisforum logo) behind Kier, which is really cool stuff. If we were real fancy we could write blank pixels to the display before blitting Kier, but, again, I don’t think I care to do that much work.

But now I must away

If I wanted to keep this joke going, I’d likely try and find a copy of the original video when Helly 100%s her file and boot into that – or maybe play a terrible midi PC speaker rendition of Kier, Chosen One, Kier after rendering the image. I, unfortunately, don’t have any friends involved with production (yet?), so I reckon all that’s out for now. I’ll likely stop playing with this – the joke was done and I’m only writing this post because of how great everything was along the way.

All in all, this reminds me so much of building a homebrew kernel to boot a system into – but like, good, though, and it’s a nice reminder of both how fun this stuff can be, and how far we’ve come. UEFI protocols are light-years better than how we did it in the dark ages, and the tooling for this is SO much more mature. Booting a custom UEFI binary is miles ahead of trying to boot your own kernel, and I can’t believe how good the uefi crate is specifically.

Praise Kier! Kudos, to everyone involved in making this so delightful ❤️.

on February 20, 2025 02:40 PM

E337 Chapeleiros De Al13

Podcast Ubuntu Portugal

THEY'RE OUT THERE! After a week of big shake-ups, listen to this censored podcast and learn everything there is to know about encrypted disks, secure communications, alternatives to Big Tech and even… books! How do you survive in a digital world turned upside down? We talked about (in)secure communications, alternatives to North American tech companies, assorted paranoias and how free software can help us survive underground, plus artificial "intelligents", experiments with Twitch broadcasts and the noble cause of compiling KDE applications as snaps.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get the whole thing for 15 dollars, or different parts depending on whether you pay 1 or 8. We think it's worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying as much as you like. If you're interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you'll also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on February 20, 2025 12:00 AM

February 19, 2025

All core22 KDE snaps are broken. There is not an easy fix. We have used kde-neon repos since inception and haven’t had issues until now.

libEGL fatal: DRI driver not from this Mesa build (‘23.2.1-1ubuntu3.1~22.04.3’ vs ‘23.2.1-1ubuntu3.1~22.04.2’)

Apparently Jammy had a mesa update?

Option 1: Rebuild our entire stack without neon repos (fails due to dependencies not in Jammy; would require tracking down all of these and building them from source)

Option 2: Finish the transition to core24 (this is an enormous task and will still take some time)

Either option will take more time and effort than I have. I need to be job hunting as I have run out of resources to pay my bills. My internet/phone will be cut off in days. I am beyond stressed out and getting snippy with folks, for that I apologize. If someone wants to sponsor the above work then please donate to https://gofund.me/fe30793b otherwise I am stepping away to rethink life and my defunct career.

I am truly sorry everyone.

New core24 Snaps:

Arianna – Epub viewer

k3b – Disc burner

Snapcraft:

Fixes for the qt5 kde-neon extension

https://github.com/canonical/snapcraft/pull/5261

on February 19, 2025 02:17 PM

February 18, 2025

Wireshark is an essential tool for network analysis, and staying up to date with the latest releases ensures access to new features, security updates, and bug fixes. While Ubuntu’s official repositories provide stable versions, they are often not the most recent.

Wearing both Wireshark Core Developer and Debian/Ubuntu package maintainer hats, I’m happy to help the Wireshark team in providing updated packages for all supported Ubuntu versions through dedicated PPAs. This post outlines how you can install the latest stable and nightly Wireshark builds on Ubuntu.

Latest Stable Releases

For users who want the most up-to-date stable Wireshark version, we maintain a PPA with backports of the latest releases:

🔗 Stable Wireshark PPA:
👉 https://launchpad.net/~wireshark-dev/+archive/ubuntu/stable

Installation Instructions

To install the latest stable Wireshark version, add the PPA and update your package list:

sudo add-apt-repository ppa:wireshark-dev/stable
sudo apt install wireshark

Nightly Builds (Development Versions)

For those who want to test new features before they are officially released, nightly builds are also available. These builds track the latest development code and you can watch them cooking on their Launchpad recipe page.

🔗 Nightly PPA:
👉 https://code.launchpad.net/~wireshark-dev/+archive/ubuntu/nightly

Installation Instructions

To install the latest development version of Wireshark, use the following commands:

sudo add-apt-repository ppa:wireshark-dev/nightly
sudo apt install wireshark

Note: Nightly builds may contain experimental features and are not guaranteed to be as stable as the official releases. Also, they target only Ubuntu 24.04 and later, including the current development release.

If you need to revert to the stable version later, remove the nightly PPA and reinstall Wireshark:

sudo add-apt-repository --remove ppa:wireshark-dev/nightly
sudo apt install wireshark

Happy sniffing! 🙂

on February 18, 2025 09:57 AM

February 13, 2025

tl;dr I’m hosting a Community Spotlight Webinar today at Anchore featuring Nicolas Vuilamy from the MegaLinter project. Register here.


Throughout my career, I’ve had the privilege of working with organizations that create widely-used open source tools. The popularity of these tools is evident through their impressive download statistics, strong community presence, and engagement both online and at events.

During my time at Canonical, we saw the tremendous reach of Ubuntu, along with tools like LXD, cloud-init, and yes, even Snapcraft.

At Influxdata, I was part of the Telegraf team, where we witnessed substantial adoption through downloads and active usage, reflected in our vibrant bug tracker.

Now at Anchore, we see widespread adoption of Syft for SBOM generation and Grype for vulnerability scanning.

What makes Syft and Grype particularly exciting, beyond their permissive licensing, consistent release cycle, dedicated developer team, and distinctive mascots, is how they serve as building blocks for other tools and services.

Syft isn’t just a standalone SBOM generator - it’s a library that developers can integrate into their own tools. Some organizations even build their own SBOM generators and vulnerability tools directly from our open source foundation!

$ docker-scout version
 ⢀⢀⢀ ⣀⣀⡤⣔⢖⣖⢽⢝
 ⡠⡢⡣⡣⡣⡣⡣⡣⡢⡀ ⢀⣠⢴⡲⣫⡺⣜⢞⢮⡳⡵⡹⡅
 ⡜⡜⡜⡜⡜⡜⠜⠈⠈ ⠁⠙⠮⣺⡪⡯⣺⡪⡯⣺
 ⢘⢜⢜⢜⢜⠜ ⠈⠪⡳⡵⣹⡪⠇
 ⠨⡪⡪⡪⠂ ⢀⡤⣖⢽⡹⣝⡝⣖⢤⡀ ⠘⢝⢮⡚ _____ _
 ⠱⡱⠁ ⡴⡫⣞⢮⡳⣝⢮⡺⣪⡳⣝⢦ ⠘⡵⠁ / ____| Docker | |
 ⠁ ⣸⢝⣕⢗⡵⣝⢮⡳⣝⢮⡺⣪⡳⣣ ⠁ | (___ ___ ___ _ _| |_
 ⣗⣝⢮⡳⣝⢮⡳⣝⢮⡳⣝⢮⢮⡳ \___ \ / __/ _ \| | | | __|
 ⢀ ⢱⡳⡵⣹⡪⡳⣝⢮⡳⣝⢮⡳⡣⡏ ⡀ ____) | (_| (_) | |_| | |_
 ⢀⢾⠄ ⠫⣞⢮⡺⣝⢮⡳⣝⢮⡳⣝⠝ ⢠⢣⢂ |_____/ \___\___/ \__,_|\__|
 ⡼⣕⢗⡄ ⠈⠓⠝⢮⡳⣝⠮⠳⠙ ⢠⢢⢣⢣
 ⢰⡫⡮⡳⣝⢦⡀ ⢀⢔⢕⢕⢕⢕⠅
 ⡯⣎⢯⡺⣪⡳⣝⢖⣄⣀ ⡀⡠⡢⡣⡣⡣⡣⡣⡃
⢸⢝⢮⡳⣝⢮⡺⣪⡳⠕⠗⠉⠁ ⠘⠜⡜⡜⡜⡜⡜⡜⠜⠈
⡯⡳⠳⠝⠊⠓⠉ ⠈⠈⠈⠈



version: v1.13.0 (go1.22.5 - darwin/arm64)
git commit: 7a85bab58d5c36a7ab08cd11ff574717f5de3ec2

$ syft /usr/local/bin/docker-scout | grep syft
 ✔ Indexed file system /usr/local/bin/docker-scout
 ✔ Cataloged contents f247ef0423f53cbf5172c34d2b3ef23d84393bd1d8e05f0ac83ec7d864396c1b
 ├── ✔ Packages [274 packages]
 ├── ✔ File digests [1 files]
 ├── ✔ File metadata [1 locations]
 └── ✔ Executables [1 executables]
github.com/anchore/syft v1.10.0 go-module

(I find it delightfully meta to discover syft inside other tools using syft itself)

A silly meme that isn't true at all :)

This collaborative building upon existing tools mirrors how Linux distributions often build upon other Linux distributions. Like Ubuntu and Telegraf, we see countless individuals and organizations creating innovative solutions that extend beyond the core capabilities of Syft and Grype. It’s the essence of open source - a multiplier effect that comes from creating accessible, powerful tools.

While we may not always know exactly how and where these tools are being used (and sometimes, rightfully so, it’s not our business), there are many cases where developers and companies want to share their innovative implementations.

I’m particularly interested in these stories because they deserve to be shared. I’ve been exploring public repositories like the GitHub network dependents for syft, grype, sbom-action, and scan-action to discover where our tools are making an impact.

The adoption has been remarkable!

I reached out to several open source projects to learn about their implementations, and Nicolas Vuilamy from MegaLinter was the first to respond - which brings us full circle.

Today, I’m hosting our first Community Spotlight Webinar with Nicolas to share MegaLinter’s story. Register here to join us!

If you’re building something interesting with Anchore Open Source and would like to share your story, please get in touch. 🙏

on February 13, 2025 10:00 AM

February 11, 2025

APT eatmydata super cow powers

Tired of waiting for apt to finish installing packages? Wish there were a way to make your installations blazingly fast without caring about minor things like, oh, data integrity? Well, today is your lucky day! 🎉

I’m thrilled to introduce apt-eatmydata, now available for Debian and all supported Ubuntu releases!

What Is apt-eatmydata?

If you’ve ever used libeatmydata, you know it’s a nifty little hack that disables fsync() and friends, making package installations way faster by skipping unnecessary disk writes. Normally, you’d have to remember to wrap apt commands manually, like this:

eatmydata apt install texlive-full

But who has time for that? apt-eatmydata takes care of this automagically by integrating eatmydata seamlessly into apt itself! That means every package install is now turbocharged—no extra typing required. 🚀

How to Get It

Debian

If you’re on Debian unstable/testing (or possibly soon in stable-backports), you can install it directly with:

sudo apt install apt-eatmydata

Ubuntu

Ubuntu users already enjoy faster package installation thanks to zstd-compressed packages, and to switch to an even higher gear I’ve backported apt-eatmydata to all supported Ubuntu releases. Just add this PPA and install:

sudo add-apt-repository ppa:firebuild/apt-eatmydata
sudo apt install apt-eatmydata

And boom! Your apt install times get a serious upgrade. Let’s run some tests…

# pre-download package to measure only the installation
$ sudo apt install -d linux-headers-6.8.0-53-lowlatency
...
# installation time is 9.35s without apt-eatmydata:
$ sudo time apt install linux-headers-6.8.0-53-lowlatency
...
2.30user 2.12system 0:09.35elapsed 47%CPU (0avgtext+0avgdata 174680maxresident)k
32inputs+1495216outputs (0major+196945minor)pagefaults 0swaps
$ sudo apt install apt-eatmydata
...
$ sudo apt purge linux-headers-6.8.0-53-lowlatency
# installation time is 3.17s with apt-eatmydata:
$ sudo time eatmydata apt install linux-headers-6.8.0-53-lowlatency
2.30user 0.88system 0:03.17elapsed 100%CPU (0avgtext+0avgdata 174692maxresident)k
0inputs+205664outputs (0major+198099minor)pagefaults 0swaps

apt-eatmydata just made installing Linux headers 3x faster!

But Wait, There’s More! 🎁

If you’re automating CI builds, there’s even a GitHub Action to make your workflows faster. It does essentially what apt-eatmydata does, and sets itself up in less than a second! Check it out here:
👉 GitHub Marketplace: apt-eatmydata

Should You Use It?

🚨 Warning: apt-eatmydata is not for all production environments. If your system crashes mid-install, you might end up with a broken package database. But for throwaway VMs, containers, and CI pipelines? It’s an absolute game-changer. I use it on my laptop, too.

So go forth and install recklessly fast! 🚀

If you run into any issues, feel free to file a bug or drop a comment. Happy hacking!

(To accelerate your CI pipeline or local builds, check out Firebuild, which speeds up the builds, too!)

on February 11, 2025 05:04 PM

February 08, 2025

Use RSS to read newsletters

Stuart Langridge

Everyone's got a newsletter these days (like everyone's got a podcast). In general, I think this is OK: instead of going through a middleman publisher, have a direct connection from you to the people who want to read what you say, so that that audience can't be taken away from you.

On the other hand, I don't actually like newsletters. I don't really like giving my email address to random people1, and frankly an email app is not a great way to read long-form text! There are many apps which are a lot better at this.

There is a solution to this and the solution is called RSS. Andy Bell explains RSS and this is exactly how I read newsletters. If I want to read someone's newsletter and it's on Substack, or ghost.io, or buttondown.email, what I actually do is subscribe to their newsletter but what I'm actually subscribing to is their RSS feed. This sections off newsletter stuff into a completely separate app that I can catch up on when I've got the time, it means that the newsletter owner (or the site they're using) can't decide to "upsell" me on other stuff they do that I'm not interested in, and it's a better, nicer reading experience than my mail app.2

I use NetNewsWire on my iOS phone, but there are a bunch of other newsreader apps for every platform and you should choose whichever one you want. Andy lists a bunch, above.

The question, of course, then becomes: how do you find the RSS feed for a thing you want to read?3 Well, it turns out... you don't have to.

When you want to subscribe to a newsletter, you literally just put the web address of the newsletter itself into your RSS reader, and that reader will take care of finding the feed and subscribing to it, for you. It's magic. Hooray! I've tested this with substack, with ghost.io, with buttondown.email, and it works with all of them. You don't need to do anything.

If that doesn't work, then there is one neat alternative you can try, though. Kill The Newsletter will give you an email address for any site you name, and provide the incoming emails to that as an RSS feed. So, if you've found a newsletter which doesn't exist on the web (boo hiss!) and doesn't provide an RSS feed, then you go to KTN, it gives you some randomly-generated email address, you subscribe to the intransigent newsletter with that email address, and then you can subscribe to the resultant feed in your RSS reader. It's dead handy.

If you run a newsletter and it doesn't have an RSS feed and you want it to have, then have a look at whatever newsletter software you use; it will almost certainly provide a way to create one, and you might have to tick a box. (You might also want to complain to the software creators that that box wasn't ticked by default.) If you've got an RSS feed for the newsletter that you write, but putting your site's address into an RSS reader doesn't find that RSS feed, then what you need is RSS autodiscovery, which is the "magic" alluded to above; you add a line to your site's HTML in the <head> section which reads <link rel="alternate" type="application/rss+xml" title="RSS" href="https://URL/of/your/feed"> and then it'll work.

I like this. Read newsletters at my pace, in my choice of app, on my terms. More of that sort of thing.

  1. despite how it's my business to do so and it's right there on the front page of the website, I know, I know
  2. Is all of this doable in my mail client? Sure. I could set up filters, put newsletters into their own folders/labels, etc. But that's working around a problem rather than solving it
  3. I suggested to Andy that he ought to write this post explaining how to do this and then realised that I should do it myself and stop being such a lazy snipe, so here it is
on February 08, 2025 03:09 PM

February 04, 2025

Lubuntu Plucky Puffin is the current development branch of Lubuntu, which will become 25.04. Since the release of 24.10, we have been hard at work polishing the experience and fixing bugs in the upcoming release. Below, we detail some of the changes you can look forward to in 25.04. Two Minute Minimal Install When installing […]
on February 04, 2025 08:32 PM

Following a bug in ubuntu-release-upgrader which was causing Ubuntu Studio 22.04 LTS to fail to upgrade to 24.04 LTS, we are pleased to announce that this bug has been fixed, and upgrades now work.

As of this writing, this update is being propagated to the various Ubuntu mirrors throughout the world. The version of ubuntu-release-upgrader needed is 24.04.26 or higher, and is automatically pulled from the 24.04 repositories upon upgrade.

Unfortunately, while testing this fix, we noticed that, due to the time_t64 transition which prevents the 2038 problem, some packages get removed. We have noticed that, if upgrading from 22.04 LTS to 24.04 LTS, the following applications get removed (this list is not exhaustive):

  • Blender
  • Kdenlive
  • digiKam
  • GIMP
  • Krita (doesn’t get upgraded)

To fix this, immediately after upgrade, open a Konsole terminal (ctrl-alt-t) and enter the following:

sudo apt -y remove ubuntustudio-graphics ubuntustudio-video ubuntustudio-photography && sudo apt -y install ubuntustudio-graphics ubuntustudio-video ubuntustudio-photography && sudo apt upgrade

If you do intend to upgrade, remember to purge any PPAs you may have enabled via ppa-purge so that your upgrade will go as smoothly as possible.
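If you have never used it, ppa-purge takes the PPA reference as its argument; a quick sketch (the PPA name below is only a placeholder for whatever you actually have enabled):

sudo apt install ppa-purge
# downgrades the PPA's packages to the archive versions and disables the PPA
sudo ppa-purge ppa:some-user/some-ppa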

We apologize for the inconvenience that may have been caused by this bug, and we hope your upgrade process goes as smoothly as possible. There may be edge cases where this goes badly as we cannot account for every installation and whatever third-party repositories may be enabled, in which case the best method is to back up your /home directory and do a clean installation.

Remember to upgrade soon, as Ubuntu Studio 22.04 goes End Of Life (EOL) in April!

on February 04, 2025 08:01 PM

February 03, 2025

Blog Questions Challenge

Stuart Langridge

The latest thing circulating around people still blogging is the Blog Questions Challenge; Jon did it (and asked if I was) and so have Jeremy and Ethan and a bunch of others, so clearly it is time I should get on board, fractionally late as ever.1

Why did you start blogging in the first place?

Some other people I admired were doing it. I think the person I was most influenced by to start doing it was Simon Willison, who is also still at it2, but a whole bunch of people got on board at around that same time, back in the early days when you could be a medium-sized fish in a small pool just by participating. Mark Pilgrim springs to mind as well -- that's a good example of having influence, when the "standard format" of permalinks got sort of hashed out collectively to be /2025/02/03/blog-questions-challenge, which a lot of places still adhere to (although it feels faintly quaint, these days).

Interestingly, a lot of the early posts on this site are short two-sentence half-paragraph things, throwaway thoughts, and that all got sucked up by social media... but social media hadn't been invented, back in 2002.

Also interestingly: the second post on this here blog3 was bitching at Mozilla about the Firefox release schedule. Nothing new under the sun.4

What platform are you using to manage your blog and why did you choose it? Have you blogged on other platforms before?

Cor. When it started, this site was being run by Castalian, which was basically "classic ASP but Python instead of VBScript", a thing I built. This is because I was using ASP at work on Windows machines, so that was the model for "dynamic web pages" that I understood, but I wasn't on Windows5 and so I built it myself. No idea if it still works and I very much doubt it since it's old enough to buy all the drinks these days.

After that it was Movable Type for a bit and then, because I'd discovered the idea of funky caching6 it was Vellum, that model (a) in Python and (b) written by me. Then for a while it was "Thort", which was based on CouchDB7, and then it was WordPress, and then in 2014 I switched from WP to a static build based on Pelican, which it still is to this day. Crikey, that was over ten years ago!8 I like static site generators: I even wrote 10 Popular Static Site Generators a few years ago for WebsiteSetup which I think is still pretty good.

How do you write your posts? For example, in a local editing tool, or in a panel/dashboard that’s part of your blog?

In my text editor, which is Sublime Text. The static setup is here on my machine; I write a post, I type make kryogenix, and it runs a whole little series of scripts which invoke Pelican to build the static HTML for the blog, do a few things that I've added (such as add footnote handling9, make og:image links and images10, and sort of handle webmentions but that's broken at the moment) and then copy it up to my actual website (via git) to be published.

It's all a bit lashed together, to be honest, but this whole website is like that. It is something like an ancient city, such as London or Rome; what this site is mostly built on is the ruins of the previous history of the city. Sometimes the older bits poke through because they're still actually OK, or they never got updated; sometimes they've been replaced with the new shiny. You should see the .htaccess file, which operates a bewildering set of redirects through about six different generations of URLs so all the old links still work.11

When do you feel most inspired to write?

When the muse seizes me. Sometimes that's a lot; sometimes not. I do quite a lot of paid writing as part of my various day jobs for others, and quite a lot of creative writing as part of running a play-by-post D&D campaign, and that sucks up a reasonable amount of the writing energy, but there are things which just demand going on the website. Normally these days it's things where I want them to be a reference of some kind -- maybe of a useful tech thing, or some important thought, or something interesting -- for myself or for others.

Alternatively you might think the answer is "while in the pub, which leads to making random notes in an email to myself from my phone and then writing a blog post when I get home" and while this is not true, it's not not true either. I do not want to do a histogram of posting times from this site because I am worried that I will find that the majority are at, like, 11.15pm.

Do you publish immediately after writing, or do you let it simmer a bit as a draft?

Always post immediately. I have discovered about myself that, for semi-ephemeral stuff like posts here or projects that I do for fun, I need to get them done as part of that initial burst of inspiration and energy. If I don't get it done, then my enthusiasm will fade and they will linger half-finished for ever and never get completed. I don't necessarily like this, but I've learned to live with it. If I think of an idea for a post and write a note about it and then don't do it, when I rediscover the note a week later it will not seem anything like as compelling. So posts are mostly written as one long stream-of-consciousness to capitalise on the burning of the creative fire before it gets doused by time or work or everything going on in the world. Carpe diem, I guess.12

What’s your favourite post on your blog?

Maybe It's Cold Outside, or Monkey Island 2, for about the fifth time, or Charles Paget Wade and the Underthing for writing, although each of them has little burrs in the wording that I want to polish when I re-read them. The series of birthday posts have been going on since the beginning, one every year, which probably wins for consistency. For technical stuff, maybe Some thoughts on soonsnap and little big details (now sadly defunct) or The thing and the whole of the thing: on DRM in HTML. I like my own writing, mostly. Arrogant, I know.

Any future plans for your blog? Maybe a redesign, a move to another platform, or adding a new feature?

Not really at the moment, but, as above, these things tend to arrive in a blizzard of excitement and implementation and then linger forever once done. But right now... it all seems to work OK. Ask me when I get back from the pub.

Next?

Well, I should probably point back at some of the people who inspired me to do this or other things and keep doing so to this day. So Simon, Remy, and Bruce, perhaps!

  1. In my defence, it was my birthday.
  2. although no longer at simon.incutio.com -- what even was Incutio?
  3. I resisted the word "blog" for a long time, calling it a "weblog", and the activity being "weblogging", because "blog" is such an ugly word. Like most of the fights I was picking in the mid 2000s, this also seems faintly antiquated and passé now. Sic transit gloria mundi and all that.
  4. or "nihil sub sole novum", since we're doing Latin quotes today
  5. and Windows's relationship with Python has always been a bit unsteady, although it's better these days now that Microsoft are prepared to acknowledge that other people can have ideas
  6. you write the pages in an online form, but then a server process builds a static HTML version of them; the advanced version of this where pages were only built on request was called "funky caching" back then
  7. if a disinterested observer were to consider this progression, they might unfairly but accurately conclude that whatever this site runs on is basically a half-arsed system I built based on the latest thing I'm interested in, mightn't they?
  8. tempus fugit. OK, I'll stop now.
  9. like this!
  10. an idea I stole shamelessly from Zach Leatherman
  11. Outgoing links are made to continue to work via unrot.link from the excellent Remy Sharp
  12. I was lying about not doing this any more, obviously
on February 03, 2025 07:17 PM

January 27, 2025

Announcing Incus 6.9

Stéphane Graber

The Incus team is pleased to announce the release of Incus 6.9!

This is a bit of a lighter release given the holiday break, but it features some nice feature additions on top of the usual healthy dose of bugfixes.

The highlights for this release are:

  • Instance network ACLs on bridge networks
  • Enhancements to QEMU scriptlet
  • VM memory dumps
  • Uplink addresses in OVN network state
  • Creation of storage volumes through server preseed file
  • Setting description in create commands

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

Some of the Incus maintainers will be present at FOSDEM 2025, helping run both the containers and kernel devrooms. For those arriving in town early, there will be a “Friends of Incus” gathering sponsored by FuturFusion on Thursday evening (January 30th), you can find the details of that here.

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated, you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

on January 27, 2025 06:19 PM

January 24, 2025

Fixed a major crash bug in our apps that use WebEngine, and I also went ahead and updated these to core24: https://bugs.launchpad.net/snapd/+bug/2095418 and https://bugs.kde.org/show_bug.cgi?id=498663

Fixed Okular:
Can’t import certificates to digitally sign in Okular (https://bugs.kde.org/show_bug.cgi?id=498558); can’t open files (https://bugs.kde.org/show_bug.cgi?id=421987 and https://bugs.kde.org/show_bug.cgi?id=415711).

Skanpage won’t launch (https://bugs.kde.org/show_bug.cgi?id=493847); it’s in --edge, please help test.

Ghostwriter https://bugs.kde.org/show_bug.cgi?id=481258


New KDE Snaps!

Kalm – Breathing techniques

Telly-skout – Display TV guides

Kubuntu: Plasma 5.27.12 has been uploaded to the archive’s -proposed pocket and should make the .2 release!

I hate asking but I am unemployable with this broken arm fiasco. If you could spare anything it would be appreciated! https://gofund.me/573cc38e

on January 24, 2025 08:00 PM

January 19, 2025

For several years, DigitalOcean has been an important sponsor of Ubuntu Budgie. They provide the infrastructure we need to host our website at https://ubuntubudgie.org and our Discourse community forum at https://discourse.ubuntubudgie.org. Maybe you are familiar with them. Maybe you use them in your personal or professional life. Or maybe, like me, you didn’t really see how they would benefit you.

Source

on January 19, 2025 05:27 PM

January 09, 2025

TL;DR

Try the following lines in your custom udev rules, e.g.
/etc/udev/rules.d/99-local-disable-wakeup-events.rules

KERNEL=="i2c-ELAN0676:00", SUBSYSTEM=="i2c", DRIVERS=="i2c_hid_acpi", ATTR{power/wakeup}="disabled"
KERNEL=="PNP0C0E:00", SUBSYSTEM=="acpi", DRIVERS=="button", ATTRS{path}=="\_SB_.SLPB", ATTR{power/wakeup}="disabled"

The motivation

Whenever something touches the red cap, the system wakes up from suspend/s2idle.

I’ve used a ThinkPad T14 Gen 3 AMD for 2 years, and I recently purchased a T14 Gen 5 AMD. The previous system, the Gen 3, annoyed me so much because the laptop randomly woke up from suspend on its own, even inside a backpack, heated up the confined air in it, and drained the battery pretty fast as a consequence. Basically it was too sensitive to any event. For example, the system woke up from suspend whenever a USB Type-C cable was plugged in as a power source, or whenever something touched the TrackPoint, even if the display of the closed lid just slightly made contact with the red cap. It was uncontrollable.

I was hoping that Gen 5 would make a difference, and it did as far as the power source event is concerned. However, frequent wakeups due to the TrackPoint event remained the same, so I started to dig in.

Disabling touchpad as a wakeup source on T14 Gen 5 AMD

Disabling touchpad events as a wakeup source is straightforward. The touchpad device, ELAN0676:00 04F3:3195 Touchpad, can be found in the udev device tree as follows.

$ udevadm info --tree
...

 └─input/input12
   ┆ P: /devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12
   ┆ M: input12
   ┆ R: 12
   ┆ U: input
   ┆ E: DEVPATH=/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12
   ┆ E: SUBSYSTEM=input
   ┆ E: PRODUCT=18/4f3/3195/100
   ┆ E: NAME="ELAN0676:00 04F3:3195 Touchpad"
   ┆ E: PHYS="i2c-ELAN0676:00"

And you can get all attributes including parent devices like the following.

$ udevadm info --attribute-walk -p /devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12
...

  looking at device '/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12':
    KERNEL=="input12"
    SUBSYSTEM=="input"
    DRIVER==""
    ...
    ATTR{name}=="ELAN0676:00 04F3:3195 Touchpad"
    ATTR{phys}=="i2c-ELAN0676:00"

...

  looking at parent device '/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00':
    KERNELS=="i2c-ELAN0676:00"
    SUBSYSTEMS=="i2c"
    DRIVERS=="i2c_hid_acpi"
    ATTRS{name}=="ELAN0676:00"
    ...
    ATTRS{power/wakeup}=="enabled"

The line I’m looking for is ATTRS{power/wakeup}=="enabled". By using the identifiers of the parent device that has ATTRS{power/wakeup}, I can make sure that /sys/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/power/wakeup is always disabled with the custom udev rule as follows.

KERNEL=="i2c-ELAN0676:00", SUBSYSTEM=="i2c", DRIVERS=="i2c_hid_acpi", ATTR{power/wakeup}="disabled"

Disabling TrackPoint as a wakeup source on T14 Gen 5 AMD

I’ve seen a pattern already as above so I should be able to apply the same method. The TrackPoint device, TPPS/2 Elan TrackPoint, can be found in the udev device tree.

$ udevadm info --tree
...

 └─input/input5
   ┆ P: /devices/platform/i8042/serio1/input/input5
   ┆ M: input5
   ┆ R: 5
   ┆ U: input
   ┆ E: DEVPATH=/devices/platform/i8042/serio1/input/input5
   ┆ E: SUBSYSTEM=input
   ┆ E: PRODUCT=11/2/a/63
   ┆ E: NAME="TPPS/2 Elan TrackPoint"
   ┆ E: PHYS="isa0060/serio1/input0"

And the information of parent devices too.

$ udevadm info --attribute-walk -p /devices/platform/i8042/serio1/input/input5
...

  looking at device '/devices/platform/i8042/serio1/input/input5':
    KERNEL=="input5"
    SUBSYSTEM=="input"
    DRIVER==""
    ...
    ATTR{name}=="TPPS/2 Elan TrackPoint"
    ATTR{phys}=="isa0060/serio1/input0"

...

  looking at parent device '/devices/platform/i8042/serio1':
    KERNELS=="serio1"
    SUBSYSTEMS=="serio"
    DRIVERS=="psmouse"
    ATTRS{bind_mode}=="auto"
    ATTRS{description}=="i8042 AUX port"
    ATTRS{drvctl}=="(not readable)"
    ATTRS{firmware_id}=="PNP: LEN0321 PNP0f13"
    ...
    ATTRS{power/wakeup}=="disabled"

I hit a wall here. ATTRS{power/wakeup}=="disabled" for the i8042 AUX port is already there, but the TrackPoint still wakes up the system from suspend. I had to bisect all the remaining wakeup sources.

The list of the remaining wakeup sources

$ cat /proc/acpi/wakeup
Device	S-state	  Status   Sysfs node
GPP0	  S0	*disabled
GPP2	  S3	*disabled
GPP5	  S0	*enabled   pci:0000:00:02.1
GPP6	  S4	*enabled   pci:0000:00:02.2
GP11	  S4	*enabled   pci:0000:00:03.1
SWUS	  S4	*disabled
GP12	  S4	*enabled   pci:0000:00:04.1
SWUS	  S4	*disabled
XHC0	  S3	*enabled   pci:0000:c4:00.3
XHC1	  S4	*enabled   pci:0000:c4:00.4
XHC2	  S4	*disabled  pci:0000:c6:00.0
NHI0	  S3	*enabled   pci:0000:c6:00.5
XHC3	  S3	*enabled   pci:0000:c6:00.3
NHI1	  S4	*enabled   pci:0000:c6:00.6
XHC4	  S3	*enabled   pci:0000:c6:00.4
LID	  S4	*enabled   platform:PNP0C0D:00
SLPB	  S3	*enabled   platform:PNP0C0E:00
 Wakeup sources:
 │  [/sys/devices/platform/USBC000:00/power_supply/ucsi-source-psy-USBC000:001/wakeup66]: enabled
 │  [/sys/devices/platform/USBC000:00/power_supply/ucsi-source-psy-USBC000:002/wakeup67]: enabled
 │ ACPI Battery [PNP0C0A:00]: enabled
 │ ACPI Lid Switch [PNP0C0D:00]: enabled
 │ ACPI Power Button [PNP0C0C:00]: enabled
 │ ACPI Sleep Button [PNP0C0E:00]: enabled
 │ AT Translated Set 2 keyboard [serio0]: enabled
 │ Advanced Micro Devices, Inc. [AMD] ISA bridge [0000:00:14.3]: enabled
 │ Advanced Micro Devices, Inc. [AMD] Multimedia controller [0000:c4:00.5]: enabled
 │ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.1]: enabled
 │ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.2]: enabled
 │ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:03.1]: enabled
 │ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:04.1]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c4:00.3]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c4:00.4]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.3]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.4]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.5]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.6]: enabled
 │ Mobile Broadband host interface [mhi0]: enabled
 │ Plug-n-play Real Time Clock [00:01]: enabled
 │ Real Time Clock alarm timer [rtc0]: enabled
 │ Thunderbolt domain [domain0]: enabled
 │ Thunderbolt domain [domain1]: enabled
 │ USB4 host controller [0-0]: enabled
 └─USB4 host controller [1-0]: enabled

Somehow, disabling SLPB “ACPI Sleep Button” stopped undesired wakeups by the TrackPoint.

  looking at parent device '/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00':
    KERNELS=="PNP0C0E:00"
    SUBSYSTEMS=="acpi"
    DRIVERS=="button"
    ATTRS{hid}=="PNP0C0E"
    ATTRS{path}=="\_SB_.SLPB"
    ...
    ATTRS{power/wakeup}=="enabled"

The final udev rule is the following. It also disables wakeup events from the keyboard as a side effect, but opening the lid or pressing the power button can still wake up the system so it works for me.

KERNEL=="PNP0C0E:00", SUBSYSTEM=="acpi", DRIVERS=="button", ATTRS{path}=="\_SB_.SLPB", ATTR{power/wakeup}="disabled"

In the case of ThinkPad T14 Gen 3 AMD

After solving the headache of frequent wakeups for the T14 Gen 5 AMD, I was curious whether I could apply the same approach to the Gen 3 AMD retrospectively. Gen 3 has the following wakeup sources active out of the box.

 Wakeup sources:
 │ ACPI Battery [PNP0C0A:00]: enabled
 │ ACPI Lid Switch [PNP0C0D:00]: enabled
 │ ACPI Power Button [LNXPWRBN:00]: enabled
 │ ACPI Power Button [PNP0C0C:00]: enabled
 │ ACPI Sleep Button [PNP0C0E:00]: enabled
 │ AT Translated Set 2 keyboard [serio0]: enabled
 │ Advanced Micro Devices, Inc. [AMD] ISA bridge [0000:00:14.3]: enabled
 │ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.1]: enabled
 │ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.2]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:04:00.3]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:04:00.4]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.0]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.3]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.4]: enabled
 │ ELAN0678:00 04F3:3195 Mouse [i2c-ELAN0678:00]: enabled
 │ Mobile Broadband host interface [mhi0]: enabled
 │ Plug-n-play Real Time Clock [00:01]: enabled
 └─Real Time Clock alarm timer [rtc0]: enabled

Disabling the touchpad event was straightforward. The only difference from Gen 5 was the ID of the device.

KERNEL=="i2c-ELAN0678:00", SUBSYSTEM=="i2c", DRIVERS=="i2c_hid_acpi", ATTR{power/wakeup}="disabled"

When it comes to the TrackPoint or power source events, nothing was able to stop them from waking up the system, even after disabling all wakeup sources. Then I came across a hidden gem named amd_s2idle.py. The “S0i3/s2idle analysis script for AMD systems” is full of domain knowledge about s2idle, like where to look in /proc or /sys, how to enable debugging, and which parts of the logs are important.

By running the script, I got the following output around the unexpected wakeup.

$ sudo python3 ./amd_s2idle.py --debug-ec --duration 30
Debugging script for s2idle on AMD systems
💻 LENOVO 21CF21CFT1 (ThinkPad T14 Gen 3) running BIOS 1.56 (R23ET80W (1.56 )) released 10/28/2024 and EC 1.32
🐧 Ubuntu 24.04.1 LTS
🐧 Kernel 6.11.0-12-generic
🔋 Battery BAT0 (Sunwoda ) is operating at 90.91% of design
Checking prerequisites for s2idle
✅ Logs are provided via systemd
✅ AMD Ryzen 7 PRO 6850U with Radeon Graphics (family 19 model 44)
...

Suspending system in 0:00:02
Suspending system in 0:00:01

Started at 2025-01-04 00:46:53.063495 (cycle finish expected @ 2025-01-04 00:47:27.063532)
Collecting data in 0:00:02
Collecting data in 0:00:01

Results from last s2idle cycle
💤 Suspend count: 1
💤 Hardware sleep cycle count: 1
○ GPIOs active: ['0']
🥱 Wakeup triggered from IRQ 9: ACPI SCI
🥱 Wakeup triggered from IRQ 7: GPIO Controller
🥱 Woke up from IRQ 7: GPIO Controller
❌ Userspace suspended for 0:00:14.031448 (< minimum expected 0:00:27)
💤 In a hardware sleep state for 0:00:10.566894 (75.31%)
🔋 Battery BAT0 lost 10000 µWh (0.02%) [Average rate 2.57W]
Explanations for your system
🚦 Userspace wasn't asleep at least 0:00:30
        The system was programmed to sleep for 0:00:30, but woke up prematurely.
        This typically happens when the system was woken up from a non-timer based source.

        If you didn't intentionally wake it up, then there may be a kernel or firmware bug

I compared the logs generated for the power button, power source, TrackPoint, and touchpad events. Except for the touchpad event, everything was coming from GPIO pin #0, and there was no further information on how to distinguish those wakeup triggers. I ended up with the drastic approach of ignoring wakeup triggers from GPIO pin #0 completely with the following kernel option.

gpiolib_acpi.ignore_wake=AMDI0030:00@0
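On Ubuntu, one common way to set a kernel option like this persistently is through GRUB; a sketch (append the option to whatever your GRUB_CMDLINE_LINUX_DEFAULT already contains, then regenerate the config and reboot):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash gpiolib_acpi.ignore_wake=AMDI0030:00@0"

# regenerate the GRUB configuration and reboot to pick up the new option
sudo update-grub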

And I get the line on each boot.

kernel: amd_gpio AMDI0030:00: Ignoring wakeup on pin 0

That comes with obvious downsides. The system doesn’t wake up frequently any longer, which is good. However, nothing can wake it up after it enters suspend: opening the lid, pressing the power button, or hitting any key is simply ignored, since all of them go to GPIO pin #0. In the end, I had to explicitly re-enable the touchpad as a wakeup source so the system can wake up by tapping the touchpad. It’s far from ideal, but the touchpad is less sensitive than the TrackPoint, so I will keep it that way.

KERNEL=="i2c-ELAN0678:00", SUBSYSTEM=="i2c", DRIVERS=="i2c_hid_acpi", ATTR{power/wakeup}="enabled"

I guess the limitation comes from the firmware, more or less, but at the same time I don’t expect fixes for a few-year-old model.


on January 09, 2025 02:50 PM

December 31, 2024

Bit of the why

So often I come across the need to keep my system from going to sleep, either indefinitely or until a process finishes. I can’t recall how I came across systemd-inhibit, but here’s my approach and a bit of the motivation.

Motivation

I noticed that GNOME Settings comes with Rygel.

After some fiddling (not much really), it starts directly once I log in, and I will be using it instead of a fully fledged Plex or the like; I just want to stream some videos from time to time from my home PC to my iPad :D using VLC.

The Hack

systemd-inhibit --who=foursixnine --why="maybe there be dragons" --mode block \
    bash -c 'while systemctl --user is-active -q rygel.service; do sleep 1h; done'

One can also use waitpid and more.

Thank you for coming to my TED talk.

on December 31, 2024 12:00 AM

December 21, 2024

Thug Life

Benjamin Mako Hill

My current playlist is this diorama of Lulu the Piggy channeling Tupac Shakur in a toy vending machine in the basement of New World Mall in Flushing Chinatown.

on December 21, 2024 11:06 PM

December 19, 2024

Being a bread torus

Benjamin Mako Hill

A concerned nutritional epidemiologist in Tokyo realizes that if you are what you eat, that means…

It’s a similar situation in Seoul, albeit with less oil and more confidence.

on December 19, 2024 02:49 AM

December 18, 2024

Last week I was bitten by an interesting C feature. The following terminate function was expected to exit when okay was zero (false); instead, it only exited when a non-zero value was passed to it. The reason is a missing semicolon after the return statement.

 

The interesting part is that this compiles fine: without the semicolon, the statement parses as return exit(...); and the void function terminate is allowed to return a void expression, in this case the void result of exit().

 

on December 18, 2024 05:43 PM

December 14, 2024

OCI (Open Container Initiative) images are the standard format based on the original Docker format. Each container image is represented as an array of ‘layers’, each of which is a .tar.gz. To unpack the container image, untar the first, then untar the second on top of the first, etc.

Several years ago, while we were working on a product which ships its root filesystem (and of course containers) as OCI layers, Tycho Andersen (https://tycho.pizza/) came up with the idea of ‘atomfs’ as a way to avoid some of the deficiencies of tar (https://www.cyphar.com/blog/post/20190121-ociv2-images-i-tar). In ‘atomfs’, the .tar.gz layers are replaced by squashfs (now optionally erofs) filesystems with dm-verity root hashes specified. Mounting an image now consists of mounting each squashfs, then merging them with overlay. Since we have the dm-verity root hash, we can ensure that the filesystem has not been corrupted without having to checksum the files before mounting, and there is no tar unpacking step.
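As a rough illustration of the mechanism (not the actual atomfs/stacker tooling; the file names, mount points and root-hash variables below are placeholders), mounting two verity-protected squashfs layers and merging them read-only might look like this:

# open each squashfs through dm-verity so every read is checked against the root hash
veritysetup open layer0.squashfs layer0 layer0.verity "$LAYER0_ROOT_HASH"
veritysetup open layer1.squashfs layer1 layer1.verity "$LAYER1_ROOT_HASH"
mount -t squashfs /dev/mapper/layer0 /mnt/layer0
mount -t squashfs /dev/mapper/layer1 /mnt/layer1
# merge the layers with overlayfs; the leftmost lowerdir is the uppermost layer
mount -t overlay overlay -o lowerdir=/mnt/layer1:/mnt/layer0 /mnt/image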

This past week, Ram Chinchani presented atomfs at the OCI weekly discussion, which you can see here https://www.youtube.com/watch?v=CUyH319O9hM starting at about 28 minutes. He showed a full use cycle, starting with a Dockerfile, building atomfs images using stacker, mounting them using atomfs, and then executing a container with lxc. Ram mentioned his goal is to have a containerd snapshotter for atomfs soon. I’m excited to hear that, as it will make it far easier to integrate into e.g. Kubernetes.

Exciting stuff!
on December 14, 2024 03:52 AM

December 11, 2024

I’m pleased to introduce uCareSystem 24.12.11, the latest version of the all-in-one system maintenance tool for Ubuntu, Linux Mint, Debian and its derivatives. This release brings some major changes in UI, fixes and improvements under the hood. Continuing on the path of the earlier release, in this release after many many … many … did […]
on December 11, 2024 01:10 PM

December 03, 2024

The new feature bug templates in Launchpad aims to streamline the bug reporting process, making it more efficient for both users and project maintainers.

In the past, Launchpad provided only a basic description field for filing bug reports. This often led to incomplete or vague submissions, as users might not include essential details or steps to reproduce an issue. This could slow down the debugging process when fixing bugs.

To improve this, we are introducing bug templates. These allow project maintainers to guide users when reporting bugs. By offering a structured template, users are prompted to provide all the necessary information, which helps to speed up the development process.
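As a sketch, a default template might simply prompt reporters for the basics (purely illustrative; the actual content is whatever the maintainer types into the template field):

What happened:

Steps to reproduce:
1.
2.

Expected behaviour:

Version and environment: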

To start using bug templates in your project, simply follow these steps:

  • Access your project’s bug page view.
  • Select ‘Configure bugs’.
  • A field showing the bug template will prompt you to fill in your desired template.
  • Save the changes. The template will now be available to users when they report a new bug for your project.

For now, only a default bug template can be set per project. Looking ahead, the idea is to expand this by introducing multiple bug templates per project, as well as templates for other content types such as merge proposals or answers. This will allow project maintainers to define various templates for different purposes, making the open-source collaboration process even more efficient.

Additionally, we will introduce Markdown support, allowing maintainers to create structured and visually clear templates using features such as headings, lists, or code blocks.

on December 03, 2024 12:58 PM

November 17, 2024

I’m pleased to introduce uCareSystem 24.11.17, the latest version of the all-in-one system maintenance tool. This release brings some minor fixes and improvements with visual changes that you will love. I’m excited to share the details of the latest update to uCareSystem! With this release, the focus is on refining the user experience and modernizing […]
on November 17, 2024 12:18 AM

November 12, 2024

Complex for Whom?

Paul Tagliamonte

In basically every engineering organization I’ve ever regarded as particularly high functioning, I’ve sat through one specific recurring conversation which is not – a conversation about “complexity”. Things are good or bad because they are or aren’t complex, architectures needs to be redone because it’s too complex – some refactor of whatever it is won’t work because it’s too complex. You may have even been a part of some of these conversations – or even been the one advocating for simple light-weight solutions. I’ve done it. Many times.

Rarely, if ever, do we talk about complexity within its rightful context – complexity for whom. Is a solution complex because it’s complex for the end user? Is it complex if it’s complex for an API consumer? Is it complex if it’s complex for the person maintaining the API service? Is it complex if it’s complex for someone outside the team maintaining it to understand? Complexity within a problem domain I’ve come to believe, is fairly zero-sum – there’s a fixed amount of complexity in the problem to be solved, and you can choose to either solve it, or leave it for those downstream of you to solve that problem on their own.

That being said, while I believe there is a lower bound in complexity to contend with for a problem, I do not believe there is an upper bound to the complexity of solutions possible. It is always possible, and in fact, very likely that teams create problems for themselves while trying to solve a problem. The rest of this post is talking to the lower bound. When getting feedback on an early draft of this blog post, I’ve been informed that Fred Brooks coined a term for what I call “lower bound complexity” – “Essential Complexity”, in the paper “No Silver Bullet—Essence and Accident in Software Engineering”, which is a better term and can be used interchangeably.

Complexity Culture

In a large enough organization, where the team is high functioning enough to have and maintain trust amongst peers, members of the team will specialize. People will begin to engage with subsets of the work to be done, and begin to have their efficacy measured against that part of the organization’s problems. Incentives shift, and over time it becomes increasingly likely that two engineers may have two very different priorities when working on the same system together. Someone accountable for uptime and tasked with responding to outages will begin to resist changes. Someone accountable for rapidly delivering features will resist gates between them and their users. Companies (either wittingly or unwittingly) will deal with this by tasking engineers with both production (feature development) and operational tasks (maintenance), so the difference in incentives isn’t usually as bad as it could be.

When we get a bunch of folks from far-flung corners of an organization in a room, fire up a slide deck and throw up some aspirational to-be architecture diagram in order to get a sign-off to solve some problem (be it someone needs a credible promotion packet, new feature needs to get delivered, or the system has begun to fail and needs fixing), the initial reaction will, more often than I’d like, start to devolve into a discussion of how this is going to introduce a bunch of complexity, going to be hard to maintain, why can’t you make it less complex?

Right around here is when I start to try and contextualize the conversation happening around me – understand what complexity is being discussed, and understand who is taking on that burden. Think about who should be owning that problem, and work through the tradeoffs involved. Is it best solved here, or left to consumers (be they other systems, developers, or users)? Should something become an API call’s optional param, taking on all the edge-cases and so on, or should users have to implement the logic using the data you return (leaving everyone else to take on all the edge-cases and maintenance)? Should you process the data, or require the user to preprocess it for you?

Frequently it’s right to make an active and explicit decision to simplify and leave problems to be solved downstream, since they may not actually need to be solved – or perhaps you expect consumers will want to own the specifics of how the problem is solved, in which case you leave lots of documentation and examples. Many other times, especially when it’s something downstream consumers are likely to hit, it’s best solved internal to the system, since the only thing that can come of leaving it unsolved are bugs, frustration and half-correct solutions. This is a grey-space of tradeoffs, not a clear decision tree. No one wants the software manifestation of a katamari ball or a junk drawer, nor does anyone want a half-baked service unable to handle the simplest use-case.

Head-in-sand as a Service

Popoffs about how complex something is, are, to a first approximation, best understood as meaning “complicated for the person making comments”. A lot of the #thoughtleadership believe that an AWS hosted EKS k8s cluster running images built by CI talking to an AWS hosted PostgreSQL RDS is not complex. They’re right. Mostly right. This is less complex – less complex for them. It’s not, however, without complexity and its own tradeoffs – it’s just complexity that they do not have to deal with. Now they don’t have to maintain machines that have pesky operating systems or hard drive failures. They don’t have to deal with updating the version of k8s, nor ensuring the backups work. No one has to push some artifact to prod manually. Deployments happen unattended. You click a button and get a cluster.

On the other hand, developers outside the ops function need to deal with troubleshooting CI, debugging access control rules encoded in turing complete YAML, permissions issues inside the cluster due to whatever the fuck a service mesh is, everyone needs to learn how to use some k8s tools they only actually use during a bad day, likely while doing some x.509 troubleshooting to connect to the cluster (an internal only endpoint; just port forward it) – not to mention all sorts of rules to route packets to their project (a single repo’s binary being run in 3 containers on a single vm host).

Beyond that, there’s the invisible complexity – complexity on the interior of a service you depend on. I think about the dozens of teams maintaining the EKS service (which is either run on EC2 instances, or alternately, EC2 instances in a trench coat, moustache and even more shell scripts), the RDS service (also EC2 and shell scripts, but this time accounting for redundancy, backups, availability zones), scores of hypervisors pulled off the shelf (xen, kvm) smashed together with the ones built in-house (firecracker, nitro, etc) running on hardware that has to be refreshed and maintained continuously. Every request processed by network ACL rules, AWS IAM rules, security group rules, using IP space announced to the internet wired through IXPs directly into ISPs. I don’t even want to begin to think about the complexity inherent in how those switches are designed. Shitloads of complexity to solve problems you may or may not have, or even know you had.

What’s more complex? An app running in an in-house 4u server racked in the office’s telco closet in the back running off the office Verizon line, or an app running four hypervisors deep in an AWS datacenter? Which is more complex to you? What about to your organization? In total? Which is more prone to failure? Which is more secure? Is the complexity good or bad? What type of Complexity can you manage effectively? Which threaten the system? Which threaten your users?

COMPLEXIVIBES

This extends beyond Engineering. Decisions regarding “what tools are we able to use” – be them existing contracts with cloud providers, CIO mandated SaaS products, a list of the only permissible open source projects – will incur costs in terms of expressed “complexity”. Pinning open source projects to a fixed set makes SBOM production “less complex”. Using only one SaaS provider’s product suite (even if its terrible, because it has all the types of tools you need) makes accreditation “less complex”. If all you have is a contract with Pauly T’s lowest price technically acceptable artisinal cloudary and haberdashery, the way you pay for your compute is “less complex” for the CIO shop, though you will find yourself building your own hosted database template, mechanism to spin up a k8s cluster, and all the operational and technical burden that comes with it. Or you won’t and make it everyone else’s problem in the organization. Nothing you can do will solve for the fact that you must now deal with this problem somewhere because it was less complicated for the business to put the workloads on the existing contract with a cut-rate vendor.

Suddenly, the decision to “reduce complexity” because of an existing contract vehicle has resulted in a huge amount of technical risk and maintenance burden being onboarded. Complexity you would otherwise externalize has now been taken on internally. With large enough organizations (specifically, in this case, I’m talking about you, bureaucracies), this is largely ignored or accepted as normal since the personnel cost is understood to be free to everyone involved. Doing it this way is more expensive, more work, less reliable and less maintainable, and yet, somehow, is, in a lot of ways, “less complex” to the organization. It’s particularly bad with bureaucracies, since screwing up a contract will get you into much more trouble than delivering a broken product, leaving basically no reason for anyone to care to fix this.

I can’t shake the feeling that for every story of technical mandates gone awry, somewhere just out of sight there’s a decisionmaker optimizing for what they believe to be the least amount of complexity – least hassle, fewest unique cases, most consistency – as they can. They freely offload complexity from their accreditation and risk acceptance functions through mandates. They will never have to deal with it. That does not change the fact that someone does.

TC;DR (TOO COMPLEX; DIDN’T REVIEW)

We wish to rid ourselves of systemic Complexity – after all, complexity is bad, simplicity is good. Removing upper-bound own-goal complexity (“accidental complexity” in Brooks’s terms) is important, but once you hit the lower bound complexity, the tradeoffs become zero-sum. Removing complexity from one part of the system means that somewhere else - maybe outside your organization or in a non-engineering function - must grow it back. Sometimes, the opposite is the case, such as when a previously manual business process is automated. Maybe that’s a good idea. Maybe it’s not. All I know is that what doesn’t help the situation is conflating complexity with everything we don’t like – legacy code, maintenance burden or toil, cost, delivery velocity.

  • Complexity is not the same as proclivity to failure. The most reliable systems I’ve interacted with are unimaginably complex, with layers of internal protection to prevent complete failure. This has its own set of costs which other people have written about extensively.
  • Complexity is not cost. Sometimes the cost of taking all the complexity in-house is less, for whatever value of cost you choose to use.
  • Complexity is not absolute. Something simple from one perspective may be wildly complex from another. The impulse to burn down complex sections of code is helpful to have generally, but sometimes things are complicated for a reason, even if that reason exists outside your codebase or organization.
  • Complexity is not something you can remove without introducing complexity elsewhere. Just as not making a decision is a decision itself; choosing to require someone else to deal with a problem rather than dealing with it internally is a choice that needs to be considered in its full context.

Next time you’re sitting through a discussion and someone starts to talk about all the complexity about to be introduced, I want to pop up in the back of your head, politely asking what does complex mean in this context? Is it lower bound complexity? Is this complexity desirable? Does what they’re saying mean something along the lines of I don’t understand the problems being solved, or does it mean something along the lines of this problem should be solved elsewhere? Do they believe this will result in more work for them in a way that you don’t see? Should this not be solved at all, by changing the bounds of what we should accept or redefining the understood limits of this system? Is the perceived complexity a result of a decision elsewhere? Who’s taking this complexity on, or more to the point, is failing to address complexity required by the problem leaving it to others? Does it impact others? How specifically? What are you not seeing?

What can change?

What should change?

on November 12, 2024 08:21 PM

October 20, 2024

I am using pretty much the exact same setup I did in 2020. Let's see who is more efficient in a live session!

But first let's take a look at the image sizes:

Image size (in G):

  • Ubuntu: 5.8
  • Xubuntu: 3.9
  • Xubuntu-minimal: 2.5
  • Kubuntu: 4.1
  • Lubuntu: 3.1
  • Ubuntu Mate: 4
  • Manjaro 24.1 (KDE): 3.9
  • Linux Mint 22 (Cinnamon): 2.8
  • Fedora 40 (Gnome): 2.2
  • Endless OS 6: 3.9

Charge Open Movie is what I viewed when I could make it to YouTube.

I decided to be more selective and remove those that did very poorly at 1.5G, which was most of them.

  • Ubuntu - booted but desktop not stable, took 1.5 minutes to load Firefox
  • Xubuntu-minimal - does not include a web browser so I can't test further. Snap is preinstalled even though no apps are - installing a web browser worked, but it couldn't start.
  • Manjaro KDE - Desktop loads, but browser doesn't
  • Xubuntu - laggy when Firefox is opened, can't load sites
  • Ubuntu Mate - laggy when Firefox is opened, can't load sites
  • Kubuntu - laggy when Firefox is opened, can't load sites
  • Linux Mint 22 - desktop loads, browser isn't responsive

Memory usage compared (in G):

  • Desktop responsive: Lubuntu 0.45, Endless OS 6.0 1, Fedora 40 0.7
  • Web browser loads simple site: Lubuntu 0.9, Endless OS 6.0 1, Fedora 40 1.1
  • YouTube worked fullscreen: Lubuntu 1.1, Endless OS 6.0 1.3, Fedora 40 1.4

Fedora video is a bit laggy, but watchable. Endless OS with Chromium is the smoothest and most responsive when watching YouTube.

For fun let's look at startup time with 2GB (with me hitting buttons as needed to open a folder)

Startup time (seconds):

  • Lubuntu: 33
  • Endless OS 6.0: 93
  • Fedora 40: 45

Conclusion

  • Lubuntu lowered its memory usage for loading a desktop from 585M in 2020 to 450M! Kudos to the Lubuntu team!
  • Both Fedora and Endless desktops worked in lower memory than in 2020 too!
  • Lubuntu, Fedora and Endless all used Zram.
  • Chromium has definitely improved its memory usage, as last time Endless got dinged for using it. Now it appears to work better than Firefox.

Notes:

  • qemu-system-x86_64 -enable-kvm -cdrom lubuntu-24.04.1-desktop-amd64.iso -m 1.5G -smp 4 -cpu host -vga virtio --full-screen
  • Screen size was set to 1080p/60Hz.
  • I tried to reproduce 585M on Lubuntu 20.04 build, but it failed on anything below 1G.
  • Getting out of full screen on YouTube apparently is an intensive task. Dropped testing that.
  • All Ubuntu was 24.04.1 LTS.
on October 20, 2024 12:54 AM

October 15, 2024


What is an “online” system?

Networking is a complex topic, and there is lots of confusion around the definition of an “online” system. Sometimes the boot process gets delayed up to two minutes, because the system still waits for one or more network interfaces to be ready. Systemd provides the network-online.target that other service units can rely on, if they are deemed to require network connectivity. But what does “online” actually mean in this context, is a link-local IP address enough, do we need a routable gateway and how about DNS name resolution?

The requirements for an “online” network interface depend very much on the services using an interface. For some services it might be good enough to reach their local network segment (e.g. to announce Zeroconf services), while others need to reach domain names (e.g. to mount a NFS share) or reach the global internet to run a web server. On the other hand, the implementation of network-online.target varies, depending on which networking daemon is in use, e.g. systemd-networkd-wait-online.service or NetworkManager-wait-online.service. For Ubuntu, we created a specification that describes what we as a distro expect an “online” system to be. Having a definition in place, we are able to tackle the network-online-ordering issues that got reported over the years and can work out solutions to avoid delayed boot times on Ubuntu systems.

In essence, we want systems to reach the following networking state to be considered online:

  1. Do not wait for “optional” interfaces to receive network configuration
  2. Have IPv6 and/or IPv4 “link-local” addresses on every network interface
  3. Have at least one interface with a globally routable connection
  4. Have functional domain name resolution on any routable interface
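A quick way to eyeball these conditions on a running system is with the standard networkd and resolved tooling (a sketch; interface names and addresses will differ):

# per-interface operational state: "degraded" means link-local only, "routable" means a global address
networkctl list
# addresses actually assigned to each interface
ip -br addr
# confirm that DNS resolution works through systemd-resolved
resolvectl query ubuntu.com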

A common implementation

NetworkManager and systemd-networkd are two very common networking daemons used on modern Linux systems. But they originate from different contexts and therefore show different behaviours in certain scenarios, such as wait-online. Luckily, on Ubuntu we already have Netplan as a unification layer on top of those networking daemons, that allows for common network configuration, and can also be used to tweak the wait-online logic.

With the recent release of Netplan v1.1 we introduced initial functionality to tweak the behaviour of the systemd-networkd-wait-online.service, as used on Ubuntu Server systems. When Netplan is used to drive the systemd-networkd backend, it will emit an override configuration file in /run/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf, listing the specific non-optional interfaces that should receive link-local IP configuration. In parallel to that, it defines a list of network interfaces that Netplan detected to be potential global connections, and waits for any of those interfaces to reach a globally routable state.

Such override config file might look like this:

[Unit]
ConditionPathIsSymbolicLink=/run/systemd/generator/network-online.target.wants/systemd-networkd-wait-online.service

[Service]
ExecStart=
ExecStart=/lib/systemd/systemd-networkd-wait-online -i eth99.43:carrier -i lo:carrier -i eth99.42:carrier -i eth99.44:degraded -i bond0:degraded
ExecStart=/lib/systemd/systemd-networkd-wait-online --any -o routable -i eth99.43 -i eth99.45 -i bond0

In addition to the new features implemented in Netplan, we reached out to upstream systemd, proposing an enhancement to the systemd-networkd-wait-online service, integrating it with systemd-resolved to check for the availability of DNS name resolution. Once this is implemented upstream, we’re able to fully control the systemd-networkd backend on Ubuntu Server systems, to behave consistently and according to the definition of an “online” system that was lined out above.

Future work

The story doesn’t end there, because Ubuntu Desktop systems are using NetworkManager as their networking backend. This daemon provides its very own nm-online utility, utilized by the NetworkManager-wait-online systemd service. It implements a much higher-level approach, looking at the networking daemon in general instead of the individual network interfaces. By default, it considers a system to be online once every “autoconnect” profile got activated (or failed to activate), meaning that either an IPv4 or IPv6 address got assigned.

Considerable enhancements still need to be implemented in this tool for it to be controllable in a fine-grained way, similar to systemd-networkd-wait-online, so that it can be instructed to wait for specific networking states on selected interfaces.

A note of caution

Making a service depend on network-online.target is considered an antipattern in most cases. This is because networking on Linux systems is very dynamic and the systemd target can only ever reflect the networking state at a single point in time. It cannot guarantee that this state will be maintained over the uptime of your system, and it has the potential to delay the boot process considerably. Cables can be unplugged, wireless connectivity can drop, or remote routers can go down at any time, affecting the connectivity state of your local system. Therefore, “instead of wondering what to do about network.target, please just fix your program to be friendly to dynamically changing network configuration.” [source].
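
For the few services that genuinely need the network up once during boot (e.g. mounting a remote filesystem), the documented pattern is to both pull in and order after the target, for example via a drop-in like this:

[Unit]
Wants=network-online.target
After=network-online.target

Even then, the service should be prepared for addresses to come and go afterwards; the target only reflects the state reached during boot.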

on October 15, 2024 07:33 AM

October 10, 2024

Xubuntu 24.10, "Oracular Oriole," is now available, featuring many updated applications from Xfce (4.18 and 4.19), GNOME (46 and 47), and MATE (1.26).

The post Xubuntu 24.10 Released appeared first on Sean Davis.

on October 10, 2024 09:19 PM

The Xubuntu team is happy to announce the immediate release of Xubuntu 24.10.

Xubuntu 24.10, codenamed Oracular Oriole, is a regular release and will be supported for 9 months, until July 2025.

Xubuntu 24.10, featuring the latest updates from Xfce 4.19 and GNOME 47.

Xubuntu 24.10 features the latest updates from Xfce 4.19, GNOME 47, and MATE 1.26. For Xfce enthusiasts, you’ll appreciate the new features and improved hardware support found in Xfce 4.19. Xfce 4.19 is the development series for the next release, Xfce 4.20, due later this year. As pre-release software, you may encounter more bugs than usual. Users seeking a stable, well-supported environment should opt for Xubuntu 24.04 “Noble Numbat” instead.

The final release images for Xubuntu Desktop and Xubuntu Minimal are available as torrents and direct downloads from xubuntu.org/download/.

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

We’d like to thank everybody who contributed to this release of Xubuntu!

Highlights and Known Issues

Highlights

  • Xfce 4.19 is included as a development preview of the upcoming Xfce 4.20. Among its new features are early Wayland support and improved scaling.
  • GNOME 47 apps, including Disk Usage Analyzer (baobab) and Sudoku (gnome-sudoku), feature a refreshed appearance and usability improvements.

Known Issues

  • The shutdown prompt may not be displayed at the end of the installation. Instead you might just see a Xubuntu logo, a black screen with an underscore in the upper left hand corner, or just a black screen. Press Enter and the system will reboot into the installed environment. (LP: #1944519)
  • Xorg crashes and the user is logged out after logging in or switching users on some virtual machines, including GNOME Boxes. (LP: #1861609)
  • You may experience choppy audio or poor system performance while playing audio, but only in some virtual machines (observed in VMware and VirtualBox)
  • OEM installation options are not currently supported or available, but will be included for Xubuntu 24.04.1

For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.

The main Ubuntu Release Notes cover many of the other packages we carry and more generic issues.

Support

For support with the release, navigate to Help & Support for a complete list of methods to get help.

on October 10, 2024 09:07 PM

The Kubuntu Team is happy to announce that Kubuntu 24.10 has been released, featuring the new and beautiful KDE Plasma 6.1: simple by default, powerful when needed.

Codenamed “Oracular Oriole”, Kubuntu 24.10 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

Under the hood, there have been updates to many core packages, including a new 6.11-based kernel, KDE Frameworks 5.116 and 6.6.0, KDE Plasma 6.1, and many updated KDE Gear applications.

Kubuntu 24.10 with Plasma 6.1

Kubuntu has seen many updates for other applications, both in our default install and installable from the Ubuntu archive.

Applications for core day-to-day usage are included and updated, such as Firefox, and LibreOffice.

For a list of other application updates and known bugs, be sure to read our release notes.

Wayland as default Plasma session.

The Plasma Wayland session is now the default option in SDDM (the display manager login screen). An X11 session can be selected instead if desired. The last used session type will be remembered, so you do not have to switch type on each login.

Download Kubuntu 24.10, or learn how to upgrade from 24.04 LTS.

Note: For upgrades from 24.04, there may be a delay of a few hours to days between the official release announcement and the Ubuntu Release Team enabling upgrades.

on October 10, 2024 03:05 PM