<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Security - Azalio</title>
	<atom:link href="https://www.azalio.io/category/security/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.azalio.io</link>
	<description>Your technology partner</description>
	<lastBuildDate>Tue, 02 Dec 2025 09:01:42 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.6.5</generator>

<image>
	<url>https://www.azalio.io/wp-content/uploads/2021/12/cropped-logo@3x-32x32.png</url>
	<title>Security - Azalio</title>
	<link>https://www.azalio.io</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Why data contracts need Apache Kafka and Apache Flink</title>
		<link>https://www.azalio.io/why-data-contracts-need-apache-kafka-and-apache-flink/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Tue, 02 Dec 2025 09:01:42 +0000</pubDate>
				<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://www.azalio.io/why-data-contracts-need-apache-kafka-and-apache-flink/</guid>

					<description><![CDATA[<p>Imagine it’s 3 a.m. and your pager goes off. A downstream service is failing, and after an hour of debugging you trace the issue to a tiny, undocumented schema change made by an upstream team. The fix is simple, but it comes with a high cost in lost sleep and operational downtime. This is the [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/why-data-contracts-need-apache-kafka-and-apache-flink/">Why data contracts need Apache Kafka and Apache Flink</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<div id="remove_no_follow">
<div class="grid grid--cols-10@md grid--cols-8@lg article-column">
<div class="col-12 col-10@md col-6@lg col-start-3@lg">
<div class="article-column__content">
<section class="wp-block-bigbite-multi-title">
<div class="container"></div>
</section>
<p>Imagine it’s 3 a.m. and your pager goes off. A downstream service is failing, and after an hour of debugging you trace the issue to a tiny, undocumented schema change made by an upstream team. The fix is simple, but it comes with a high cost in lost sleep and operational downtime.</p>
<p>This is the nature of many modern <a href="https://www.infoworld.com/article/3487711/the-definitive-guide-to-data-pipelines.html">data pipelines</a>. We’ve mastered the art of building distributed systems, but we’ve neglected a critical part of the system: the agreement on the data itself. This is where data contracts come in, and why they fail without the right tools to enforce them.</p>
<h2 class="wp-block-heading" id="the-importance-of-data-contracts">The importance of data contracts</h2>
<p>Data pipelines are a popular way to move data from different producers (databases, applications, logs, microservices, etc.) to consumers to drive event-driven applications or enable further processing and analytics. These pipelines have often been developed in an ad hoc manner, without a formal specification for the data being produced and without direct input from consumers on what data they expect. As a result, it’s not uncommon for upstream producers to introduce ad hoc changes that consumers don’t expect and can’t process. The result? Operational downtime and expensive, time-consuming debugging to find the root cause.</p>
<p>Data contracts were developed to prevent this.</p>
<p>Data contract design requires data producers and consumers to collaborate early in the software design life cycle to define and refine requirements. Explicitly defining and documenting requirements early on simplifies pipeline design and reduces or removes errors in consumers caused by data changes not defined in the contract.</p>
<p>Data contracts are agreements between data producers and consumers that define schemas, data types, and data quality constraints for the data shared between them. Data pipelines leverage distributed software to map the flow of data and its transformation from producers to consumers. Data contracts are foundational to properly designed and well-behaved data pipelines.</p>
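<p>To make this concrete, here is a minimal sketch of the schema portion of such a contract, expressed as an Avro schema and checked in Python with the fastavro library. The field names, the non-negative-amount rule, and the orders domain are illustrative assumptions, not taken from any particular contract.</p>
<pre class="wp-block-code"><code>
from fastavro import parse_schema
from fastavro.validation import validate

# Schema portion of a hypothetical "orders" data contract (field names are illustrative).
order_schema = parse_schema({
    "type": "record",
    "name": "Order",
    "namespace": "com.example.orders",
    "fields": [
        {"name": "order_id", "type": "string"},
        {"name": "customer_id", "type": "string"},
        {"name": "amount", "type": "double"},
        {"name": "currency", "type": "string", "default": "USD"},
    ],
})

record = {"order_id": "o-123", "customer_id": "c-9", "amount": 42.5, "currency": "USD"}

# Structural check: the record must match the agreed schema and data types.
validate(record, order_schema)

# One data quality constraint from the contract, checked explicitly here for illustration.
assert record["amount"] >= 0, "amount must be non-negative per the data contract"
</code></pre>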
<h2 class="wp-block-heading" id="why-we-need-data-contracts">Why we need data contracts</h2>
<p>Why should data contracts matter to developers and the business? First, data contracts reduce costs by eliminating unexpected upstream data changes that cause operational downtime.</p>
<p>Second, they reduce developer time spent debugging and fixing errors. These errors occur downstream, caused by changes a developer introduced without understanding their effects on consumers. Data contracts provide this understanding. </p>
<p>Third, formal data contracts aid the development of well-defined, reusable data products that multiple consumers can leverage for analytics and applications.</p>
<p>The consumer and producer can leverage the data contract to define schema and other changes before the producer implements them. The data contract should specify a cutover process, so consumers can migrate to the new schema and its associated contract without disruption.</p>
<h2 class="wp-block-heading" id="three-important-data-contract-requirements">Three important data contract requirements</h2>
<p>Data contracts have garnered much interest recently, as enterprises realize the benefits of shifting their focus upstream to where data is produced when building operational products that are data-driven. This process is often called “<a href="https://www.confluent.io/blog/shift-left-bad-data-in-event-streams-part-1/">shift left</a>.”</p>
<div class="extendedBlock-wrapper block-coreImage undefined">
<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" src="http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-01.png" alt="Data contracts Kafka Flink 01" class="wp-image-4086068" srcset="http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-01.png?quality=50&amp;strip=all 1080w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-01.png?resize=300%2C225&amp;quality=50&amp;strip=all 300w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-01.png?resize=768%2C576&amp;quality=50&amp;strip=all 768w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-01.png?resize=1024%2C768&amp;quality=50&amp;strip=all 1024w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-01.png?resize=929%2C697&amp;quality=50&amp;strip=all 929w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-01.png?resize=224%2C168&amp;quality=50&amp;strip=all 224w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-01.png?resize=112%2C84&amp;quality=50&amp;strip=all 112w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-01.png?resize=640%2C480&amp;quality=50&amp;strip=all 640w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-01.png?resize=480%2C360&amp;quality=50&amp;strip=all 480w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-01.png?resize=333%2C250&amp;quality=50&amp;strip=all 333w" width="1024" height="768" sizes="auto, (max-width: 1024px) 100vw, 1024px"></figure>
<p class="imageCredit">Confluent</p>
</div>
<p>In a shift-left data pipeline design, downstream consumers can share their data product requirements with upstream data producers. These requirements can then be distilled and codified into the data contract.</p>
<p>Data contract adoption requires three key capabilities:</p>
<ul class="wp-block-list">
<li>Specification — define the data contract</li>
<li>Implementation — implement the data contract in the data pipeline</li>
<li>Enforcement — enforce the data contract in real time</li>
</ul>
<p>A variety of technologies can support these capabilities, but Apache Kafka and Apache Flink are among the best suited for the purpose.</p>
<h2 class="wp-block-heading" id="apache-kafka-and-apache-flink-for-data-contracts">Apache Kafka and Apache Flink for data contracts</h2>
<p><a href="https://www.infoworld.com/article/2334545/what-is-apache-kafka-scalable-event-streaming.html">Apache Kafka</a> and <a href="https://www.infoworld.com/article/2335162/apache-flink-101-a-guide-for-developers.html">Apache Flink</a> are popular technologies for building data pipelines and data contracts due to their scalability, wide availability, and low latency. They provide shared storage infrastructure between producers and consumers. In addition, Kafka allows producers to communicate the schemas, data types, and (implicitly) the serialization format to consumers. This shared information also allows Flink to transform data as it travels between the producer and consumer.</p>
<p><a href="https://kafka.apache.org/">Apache Kafka</a> is a distributed event streaming platform that provides high-throughput, fault-tolerance, and scalability for shared data pipelines. It functions as a distributed log enabling producers to publish data to topics that consumers can asynchronously subscribe to. In Kafka, topics have schemas, defined data types, and data quality rules. Kafka can store and process streams of records (events) in a reliable and distributed manner. Kafka is widely used for building data pipelines, streaming analytics, and event-driven architectures.</p>
<p><a href="https://flink.apache.org/">Apache Flink</a> is a distributed stream processing framework designed for high-performance, scalable, and fault-tolerant processing of real-time and batch data. Flink excels at handling large-scale data streams with low latency and high throughput, making it a popular choice for real-time analytics, event-driven applications, and data processing pipelines.</p>
<p>Flink often integrates with Kafka, using Kafka as a source or sink for streaming data. Kafka handles the ingestion and storage of event streams, while Flink processes those streams for analytics or transformations. For example, a Flink job might read events from a Kafka topic, perform aggregations, and write results back to another Kafka topic or a database.</p>
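<p>As a rough illustration of that pattern, the sketch below uses PyFlink’s Table API to read order events from one Kafka topic, aggregate them into per-minute totals, and write the results to another topic. It assumes a Flink installation with the Kafka SQL connector available; the broker address, topic names, and fields are placeholders.</p>
<pre class="wp-block-code"><code>
from pyflink.table import EnvironmentSettings, TableEnvironment

# Streaming Table API environment; assumes the Kafka SQL connector JAR is on the classpath.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Source table: raw order events read from a Kafka topic (names and fields are placeholders).
t_env.execute_sql("""
    CREATE TABLE orders (
        order_id STRING,
        amount   DOUBLE,
        ts       TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders',
        'properties.bootstrap.servers' = 'localhost:9092',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# Sink table: per-minute revenue written back to another Kafka topic.
t_env.execute_sql("""
    CREATE TABLE revenue_per_minute (
        window_start TIMESTAMP(3),
        total_amount DOUBLE
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'revenue-per-minute',
        'properties.bootstrap.servers' = 'localhost:9092',
        'format' = 'json'
    )
""")

# The aggregation itself: tumbling one-minute windows over the order stream.
t_env.execute_sql("""
    INSERT INTO revenue_per_minute
    SELECT TUMBLE_START(ts, INTERVAL '1' MINUTE), SUM(amount)
    FROM orders
    GROUP BY TUMBLE(ts, INTERVAL '1' MINUTE)
""").wait()
</code></pre>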
<p>Kafka supports schema versioning and can support multiple versions of the same data contract as it evolves over time. Kafka can keep the old version available alongside the new one, so new clients can use the new schema while existing clients continue to use the old schema. Mechanisms like Flink’s support for materialized views help accomplish this.</p>
<h2 class="wp-block-heading" id="how-kafka-and-flink-help-implement-data-contracts">How Kafka and Flink help implement data contracts</h2>
<p>Kafka and Flink are a great way to build data contracts that meet the three requirements outlined earlier—specification, implementation, and enforcement. As <a href="https://www.infoworld.com/article/2262355/what-is-open-source-software-open-source-and-foss-explained.html">open-source</a> technologies, they play well with other data pipeline components that are often built using open source software or standards. This creates a common language and infrastructure around which data contracts can be specified, implemented, and enforced.</p>
<p>Flink can help enforce data contracts and evolve them as needed by producers and consumers, in some cases without modifying producer code. Kafka provides a common, ubiquitous language that supports specification while making implementation practical.</p>
<p>Kafka and Flink encourage reuse of the carefully crafted data products specified by data contracts. Kafka is a data storage and sharing technology that makes it easy to enable additional consumers and their pipelines to use the same data product. This is a powerful form of software reuse. Kafka and Flink can transform and shape data from one contract into a form that meets the requirements of another contract, all within the same shared infrastructure.</p>
<p>You can deploy and manage Kafka yourself, or leverage a <a href="https://www.confluent.io/lp/confluent-cloud/?utm_medium=sem&amp;utm_source=google&amp;utm_campaign=ch.sem_br.brand_tp.prs_tgt.confluent-brand_mt.xct_rgn.namer_sbrgn.unitedstates_lng.eng_dv.all_con.confluent-cloud_term.confluent-cloud&amp;utm_term=confluent%20cloud&amp;creative=&amp;device=c&amp;placement=&amp;gad_source=1&amp;gad_campaignid=2044620619&amp;gbraid=0AAAAADRv2c1UiUfJ_eHOPV5lXVY1wrgz0&amp;gclid=CjwKCAjw7fzDBhA7EiwAOqJkhyXwiBVw_0H5h-k7z-diTMIgfHKQCbuOxX7MY6gZweJzouPSILjIHxoCKW4QAvD_BwE">Kafka cloud service</a> and let others manage it for you. Any data producer or consumer can be supported by Kafka, unlike strictly commercial products that have limits on the supported producers and consumers.</p>
<p>You could get enforcement via a single database if all the data managed by your contracts sits in that database. But applications today are often built using data from many sources. For example, data streaming applications often have multiple data producers streaming data to multiple consumers. Data contracts must be enforced across these different databases, APIs, and applications.</p>
<p>You can specify a data contract at the producer end, collaborating with the producer to get the data in the form you need. But enforcement at the producer end is intrusive and complex. Each data producer has its own authentication and security mechanisms. The data contract architecture would need to be adapted to each producer. Every new producer added to the architecture would have to be accommodated. In addition, small changes to schema, metadata, and security happen continuously. With Kafka, these changes can be managed in one place.</p>
<p>Kafka sits between producers and consumers. With Kafka Schema Registry, producers and consumers have a way of communicating what is expected by their data contract. Because topics are reusable, the data contract may be reused directly or incrementally modified and then reused.</p>
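<p>A minimal sketch of that handshake from the producer side, using the confluent-kafka Python client, is shown below. The Avro schema stands in for the contract, and the serializer registers it with Schema Registry so consumers can discover exactly what to expect; the broker and registry addresses, topic, and fields are placeholders.</p>
<pre class="wp-block-code"><code>
from confluent_kafka import SerializingProducer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer

# The contract's schema, registered under the topic's value subject on first use.
ORDER_SCHEMA = """
{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "order_id", "type": "string"},
    {"name": "amount", "type": "double"}
  ]
}
"""

registry = SchemaRegistryClient({"url": "http://localhost:8081"})  # placeholder registry URL
serializer = AvroSerializer(registry, ORDER_SCHEMA)

producer = SerializingProducer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker address
    "value.serializer": serializer,
})

# A record that does not match the registered schema fails serialization here,
# so contract violations surface in the producer rather than downstream.
producer.produce(topic="orders", value={"order_id": "o-123", "amount": 42.5})
producer.flush()
</code></pre>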
<div class="extendedBlock-wrapper block-coreImage undefined">
<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" src="http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-02.png" alt="Data contracts Kafka Flink 02" class="wp-image-4086149" srcset="http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-02.png?quality=50&amp;strip=all 1008w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-02.png?resize=300%2C234&amp;quality=50&amp;strip=all 300w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-02.png?resize=768%2C599&amp;quality=50&amp;strip=all 768w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-02.png?resize=894%2C697&amp;quality=50&amp;strip=all 894w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-02.png?resize=215%2C168&amp;quality=50&amp;strip=all 215w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-02.png?resize=108%2C84&amp;quality=50&amp;strip=all 108w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-02.png?resize=616%2C480&amp;quality=50&amp;strip=all 616w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-02.png?resize=462%2C360&amp;quality=50&amp;strip=all 462w, http://www.azalio.io/wp-content/uploads/2025/12/Data-contracts-Kafka-Flink-02.png?resize=321%2C250&amp;quality=50&amp;strip=all 321w" width="1008" height="786" sizes="auto, (max-width: 1008px) 100vw, 1008px"><figcaption class="wp-element-caption">
<p>Data contract enforcement in Kafka. </p>
</figcaption></figure>
<p class="imageCredit">Confluent</p>
</div>
<p>Kafka also provides shared, standardized security and data infrastructure for all data producers. Schemas can be designed, managed, and enforced at Kafka’s edge, in cooperation with the data producer. Disruptive changes to the data contract can be detected and blocked there. </p>
<p>Data contract implementation needs to be simple and built into existing tools, including <a href="https://www.infoworld.com/article/2269266/what-is-cicd-continuous-integration-and-continuous-delivery-explained.html">continuous integration and continuous delivery</a> (CI/CD). Kafka’s ubiquity, open source nature, scalability, and data reusability make it the de facto standard for providing reusable data products with data contracts.</p>
<h2 class="wp-block-heading" id="best-practices-for-developers-building-data-contracts">Best practices for developers building data contracts</h2>
<p>As a data engineer or developer, data contracts can help you deliver better software and user experiences at a lower cost. Here are a few guidelines for best practices as you start leveraging data contracts for your pipelines and data products.</p>
<ol start="1" class="wp-block-list">
<li>Standardize schema formats: Use Avro or Protobuf for Kafka due to their strong typing and compatibility features. JSON Schema is a suitable alternative but less efficient. </li>
<li>Automate validation: Use CI/CD pipelines to validate schema changes against compatibility rules before deployment (see the sketch after this list). Make sure your code for configuring, initializing, and changing Kafka topic schemas is part of your CI/CD workflows and check-ins.</li>
<li>Version incrementally: Use semantic versioning (e.g., v1.0.0, v1.1.0) for schemas and document changes. This should be part of your CI/CD workflows and run-time checks for compatibility.</li>
<li>Monitor and alert: Set up alerts for schema and type violations or data quality issues in Kafka topics or Flink jobs.</li>
<li>Collaborate across teams: Ensure producers and consumers (e.g., different teams’ Flink jobs) agree on the contract up front to avoid mismatches. Leverage collaboration tools (preferably graphical) that allow developers, business analysts, and data engineers to jointly define, refine, and evolve the contract specifications.</li>
<li>Test schema evolution: Simulate schema changes in a staging environment to verify compatibility with Kafka topics and Flink jobs.</li>
</ol>
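<p>For item 2 above, here is a minimal sketch of such a CI/CD gate using the confluent-kafka Python client; the subject name, schema path, and registry URL are placeholders. The check compares a candidate schema against the latest registered version and fails the build if the change would break the contract.</p>
<pre class="wp-block-code"><code>
from confluent_kafka.schema_registry import Schema, SchemaRegistryClient

# Candidate schema produced by the current build (path and subject name are placeholders).
with open("schemas/order.avsc") as f:
    candidate = Schema(f.read(), schema_type="AVRO")

registry = SchemaRegistryClient({"url": "http://localhost:8081"})  # placeholder registry URL

# Compare the candidate against the latest registered version of the subject.
# Failing the build here keeps incompatible schema changes out of the pipeline.
if not registry.test_compatibility("orders-value", candidate):
    raise SystemExit("Schema change is incompatible with the 'orders-value' data contract")
</code></pre>
<p>A step like this can run in any CI system before deployment, alongside the compatibility level (for example, backward or full) you have agreed on for the subject.</p>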
<ol start="5" class="wp-block-list"></ol>
<ol start="6" class="wp-block-list"></ol>
<p>You can find out more on how to develop data contracts with Kafka <a href="https://docs.confluent.io/platform/current/schema-registry/fundamentals/data-contracts.html">here</a>.</p>
<h2 class="wp-block-heading" id="key-capabilities-for-data-contracts">Key capabilities for data contracts</h2>
<p>Kafka and Flink provide a common language to define schemas, data types, and data quality rules. This common language is shared and understood by developers. It can be independent of the particular data producer or consumer.</p>
<p>Kafka and Flink have critical capabilities to make data contracts practical and widespread in your organization:</p>
<ul class="wp-block-list">
<li>Broad support for potential data producers and consumers</li>
<li>Widespread adoption, usage, and understanding, partly due to their open source origins</li>
<li>Many implementations available, including on-prem, cloud-native, and BYOC (Bring Your Own Cloud)</li>
<li>The ability to operate at both small and large scales</li>
<li>Mechanisms to modify data contracts and their schemas as they evolve</li>
<li>Sophisticated mechanisms for evolving schemas and reusing data contracts when joining multiple streams, each with its own data contract.</li>
</ul>
<p>Data contracts require a new culture and mindset that encourages data producers to collaborate with data consumers. Consumers need to design and describe their schema and other data pipeline requirements in collaboration with producers, guided by developers and data architects.</p>
<p>Kafka and Flink make it much easier to specify, implement, and enforce the data contracts your collaborative producers and consumers develop. Use them to get your data pipelines up and running faster and operating more efficiently, without downtime, while delivering more value to the business.</p>
<p><em>—</em></p>
<p><a href="https://www.infoworld.com/blogs/new-tech-forum"><strong><em>New Tech Forum</em></strong></a><em><strong> provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all </strong></em><em><strong>inquiries to </strong></em><a href="mailto:doug_dineley@foundryco.com"><strong><em>doug_dineley@foundryco.com</em></strong></a><em><strong>.</strong></em></p>
</div>
</div>
</div>
</div>
</div><p>The post <a href="https://www.azalio.io/why-data-contracts-need-apache-kafka-and-apache-flink/">Why data contracts need Apache Kafka and Apache Flink</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Spam flooding npm registry with token stealers still isn’t under control</title>
		<link>https://www.azalio.io/spam-flooding-npm-registry-with-token-stealers-still-isnt-under-control/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Mon, 17 Nov 2025 06:58:55 +0000</pubDate>
				<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://www.azalio.io/spam-flooding-npm-registry-with-token-stealers-still-isnt-under-control/</guid>

					<description><![CDATA[<p>A coordinated token farming campaign continues to flood the open source npm registry, with tens of thousands of infected packages created almost daily to steal tokens from unsuspecting developers using the Tea Protocol to reward coding work. On Thursday, researchers at Amazon said there were over 150,000 packages in the campaign. But in an interview [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/spam-flooding-npm-registry-with-token-stealers-still-isnt-under-control/">Spam flooding npm registry with token stealers still isn’t under control</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<div id="remove_no_follow">
<div class="grid grid--cols-10@md grid--cols-8@lg article-column">
<div class="col-12 col-10@md col-6@lg col-start-3@lg">
<div class="article-column__content">
<section class="wp-block-bigbite-multi-title">
<div class="container"></div>
</section>
<p>A coordinated token farming campaign continues to flood the open source npm registry, with tens of thousands of infected packages created almost daily to steal tokens from unsuspecting developers using the Tea Protocol to reward coding work.</p>
<p>On Thursday, <a href="https://aws.amazon.com/blogs/security/amazon-inspector-detects-over-150000-malicious-packages-linked-to-token-farming-campaign/">researchers at Amazon said</a> there were over 150,000 packages in the campaign. But in an interview on Friday, an executive at software supply chain management provider Sonatype, which wrote about the campaign in April 2024, told <em>CSO</em> that number has now grown to 153,000.</p>
<p>And while this payload merely steals tokens, other threat actors are paying attention, said Sonatype CTO <a href="https://www.sonatype.com/company" target="_blank" rel="noreferrer noopener">Brian Fox</a>.</p>
<p><a href="https://www.sonatype.com/blog/devs-flood-npm-with-10000-packages-to-reward-themselves-with-tea-tokens">When Sonatype wrote about the campaign just over a year ago</a>, it found a mere 15,000 packages that appeared to come from a single person.</p>
<p>With the swollen numbers reported this week, Amazon researchers wrote that it’s “one of the largest package flooding incidents in open source registry history, and represents a defining moment in supply chain security.”</p>
<p>This campaign is just the latest way threat actors are taking advantage of security holes in open source repositories, a trend that risks damaging the reputation of sites like npm, PyPI, and others.</p>
<p><a href="https://www.csoonline.com/article/4081492/modern-supply-chain-attacks-and-their-real-world-impact.html"><strong>Related content: Supply chain attacks and their consequences</strong></a></p>
<p>“The malware infestation in open-source repositories is a full-blown crisis, out of control and dangerously eroding trust in the open-source upstream supply chain,” said <a href="https://www.linkedin.com/in/draidman/" target="_blank" rel="noreferrer noopener">Dmitry Raidman</a>, CTO of Cybeats, which makes a software bill of materials solution.</p>
<p>As evidence, he pointed to <a href="https://www.cisa.gov/news-events/alerts/2025/09/23/widespread-supply-chain-compromise-impacting-npm-ecosystem">the Shai‑Hulud worm’s rapid exploitation</a> of the npm ecosystem, which shows how quickly attackers can hijack developer tokens, corrupt packages, and propagate laterally across the entire dependency ecosystem. “What began as a single compromise explodes in a few hours, leaving the whole ecosystem and every downstream project in the industry at risk in a matter of days, regardless of whether it is open source or commercial.”</p>
<p>This past September, Raidman <a href="https://www.cybeats.com/blog/the-alarming-acceleration-of-supply-chain-attacks-from-nx-to-qix-in-just-13-days">wrote about the compromise of the Nx build system</a> after threat actors pushed malicious versions of the package into npm. Within hours, he wrote, developers around the world were unknowingly pulling in code that stole SSH keys, authentication tokens, and cryptocurrency wallets.</p>
<p>These and more recent large scale uploads of malicious packages into open source repositories are “just the beginning,” he warned, unless developers and repository maintainers improve security.</p>
<p>The Amazon and Sonatype reports aren’t the first to detect this campaign. Australian researcher <a href="https://melbourne2024.cyberconference.com.au/speakers/paul-mccarty-ebbdx" target="_blank" rel="noreferrer noopener">Paul McCarty</a> of SourceCodeRed confirmed to us this is the spam he dubbed ‘IndonesianFoods’ <a href="https://sourcecodered.com/indonesianfoods-npm-worm/">in a blog this week.</a></p>
<h2 class="wp-block-heading" id="the-tea-protocol">The Tea Protocol</h2>
<p>The Tea Protocol is a blockchain-based platform that gives open-source developers and package maintainers tokens called Tea as rewards for their software work. These tokens are also supposed to help secure the software supply chain and enable decentralized governance across the network, <a href="https://tea.xyz/">say its creators on their website.</a></p>
<p>Developers put Tea code that links to the blockchain in their apps; the more an app is downloaded, the more Tea tokens they get, which can then be cashed in through a fund. The spam scheme is an attempt to make the blockchain think apps created by the threat actors are highly popular and therefore earn a lot of tokens.</p>
<p>At the moment, the tokens have no value. But it is suspected that the threat actors are positioning themselves to receive real cryptocurrency tokens when the Tea Protocol launches its Mainnet, where Tea tokens will have actual monetary value and can be traded.</p>
<p>For now, says Sonatype’s Fox, the scheme wastes the time of npm administrators, who are trying to expel over 100,000 packages. But Fox and Amazon point out the scheme could inspire others to take advantage of other reward-based systems for financial gain, or to deliver malware.</p>
<h2 class="wp-block-heading" id="what-it-leaders-and-developers-should-do">What IT leaders and developers should do</h2>
<p>To lower the odds of abuse, open source repositories should tighten their access control, limiting the number of users who can upload code, said Raidman of Cybeats. That includes the use of multi-factor authentication in case login credentials of developers are stolen, he said, and adding digital signing capabilities to uploaded code to authenticate the author.</p>
<p>IT leaders should insist all code their firm uses has a software bill of materials (SBOM), so security teams can see the components. They also need to insist developers know the versions of the open source code they include in their apps, and confirm only approved and safe versions are being used and not automatically changed just because a new version is downloaded from a repository.</p>
<p>Sonatype’s Fox said IT leaders need to buy tools that can intercept and block malicious downloads from repositories. Antivirus software is useless here, he said, because malicious code uploaded to repositories won’t contain the signatures that AV tools are supposed to detect.</p>
<p>In response to emailed questions, the authors of the Amazon blog, researchers Chi Tran and Charlie Bacon, said open source repositories need to deploy advanced detection systems to identify suspicious patterns like malicious configuration files, minimal or cloned code, predictable code naming schemes and circular dependency chains.</p>
<p>“Equally important,” they add, “is monitoring package publishing velocity, since automated tools create at speeds no human developer could match. In addition, enhanced author validation and accountability measures are crucial for prevention. This includes implementing stronger identity verification for new accounts, monitoring for coordinated publishing activity across multiple developer accounts, as seen in this campaign, and applying ‘guilt by association’ principles where packages from accounts linked to malicious activity receive heightened scrutiny. Repositories should also track behavioral patterns like rapid account creation followed by mass package publishing, which are hallmarks of automated abuse.”</p>
<p>CISOs discovering these packages in their environments “face an uncomfortable reality,” the Amazon authors add: “Their current security controls had failed to detect a coordinated supply chain attack.”</p>
<p>SourceCodeRed’s McCarty said IT leaders need to protect developers’ laptops, as well as their automated continuous integration and delivery pipelines (CI/CD). Traditional security tools like EDR and SCA don’t scan for malware, he warned. “The number of people that buy Snyk thinking it does this is huge,” he said. </p>
<p>McCarty has created two open source malware scanning tools. One, <a href="https://opensourcemalware.com/">opensourcemalware.com</a>, is an open database of malicious content like npm packages. It can be checked to see if a package being used is malicious. The second is the automated open-source <a href="https://github.com/6mile/MALOSS">MALOSS</a> tool, which is effectively a scanner that checks opensourcemalware.com and other sources automatically. MALOSS can be used in a CI/CD pipeline or on a local workstation.</p>
<p>He also recommends the use of a commercial or open source package firewall, which effectively allows a developer to only install approved packages. </p>
<p>“The enterprise has more options than I think they realize,” he told CSO. “They just often don’t realize that there are tools and solutions to address this risk.  Maturity is really low in this space.”</p>
</div>
</div>
</div>
</div>
</div><p>The post <a href="https://www.azalio.io/spam-flooding-npm-registry-with-token-stealers-still-isnt-under-control/">Spam flooding npm registry with token stealers still isn’t under control</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Google touts Python client library for Data Commons</title>
		<link>https://www.azalio.io/google-touts-python-client-library-for-data-commons/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Sat, 28 Jun 2025 18:56:38 +0000</pubDate>
				<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://www.azalio.io/google-touts-python-client-library-for-data-commons/</guid>

					<description><![CDATA[<p>Google has released version 2 of its Python client library to query the Data Commons platform, which organizes the world&#8217;s publicly available statistical data. The library supports custom instances, among other capabilities. Announced June 26, the Data Commons Python library can be used to explore the Data Commons knowledge graph and retrieve statistical data from [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/google-touts-python-client-library-for-data-commons/">Google touts Python client library for Data Commons</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<div id="remove_no_follow">
<div class="grid grid--cols-10@md grid--cols-8@lg article-column">
<div class="col-12 col-10@md col-6@lg col-start-3@lg">
<div class="article-column__content">
<section class="wp-block-bigbite-multi-title">
<div class="container"></div>
</section>
<p>Google has released version 2 of its <a href="https://www.infoworld.com/article/2253770/what-is-python-powerful-intuitive-programming.html">Python</a> client library to query the <a href="https://datacommons.org/">Data Commons</a> platform, which organizes the world’s publicly available statistical data. The library supports custom instances, among other capabilities.</p>
<p>Announced <a href="https://developers.googleblog.com/en/pythondatacommons/">June 26</a>, the Data Commons Python library can be used to explore the Data Common knowledge graph and retrieve statistical data from more than 200 datasets. Available domains include demographics, economy, education, environment, energy, health, and housing. Developers can use the library’s <a href="https://datacommons.org/build">custom instances</a> to programmatically query any public or private instance, whether hosted locally or on the Google Cloud Platform. Developers can use custom instances to seamlessly integrate proprietary datasets with the Data Commons knowledge graph, according to Google.</p>
<p><a href="https://github.com/datacommonsorg/api-python/tree/master/datacommons_client">Based on the V2 REST API</a>, the Data Commons Python library in version 2 supports <a href="https://pandas.pydata.org/">Pandas</a> dataframe APIs as an integral module, with a single installation package allowing seamless use with other API endpoints in the same client, Google says. API key management and other stateful operations are built into the client class. Integration with<a href="https://docs.pydantic.dev/latest/"> Pydantic</a> libraries improves type safety, serialization, and validation. Additionally, multiple response formats are supported, including JSON and Python dictionaries. With the library, developers can map entities from other datasets to entities in Data Commons. The Data Commons Python API client library is <a href="https://github.com/datacommonsorg/api-python/tree/master/datacommons_client" data-type="link" data-id="https://github.com/datacommonsorg/api-python/tree/master/datacommons_client">hosted on GitHub</a> and available on <a href="https://pypi.org/project/datacommons-client/">pypi.org</a>.</p>
</div>
</div>
</div>
</div>
</div><p>The post <a href="https://www.azalio.io/google-touts-python-client-library-for-data-commons/">Google touts Python client library for Data Commons</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>.NET 10 Preview 5 highlights C# 14, runtime improvements</title>
		<link>https://www.azalio.io/net-10-preview-5-highlights-c-14-runtime-improvements/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Thu, 12 Jun 2025 20:56:22 +0000</pubDate>
				<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://www.azalio.io/net-10-preview-5-highlights-c-14-runtime-improvements/</guid>

					<description><![CDATA[<p>Microsoft has launched the fifth preview of its planned .NET 10 open source developer platform. The preview release fits C# 14 with user-defined compound assignment operators and enhances the .NET runtime with escape analysis, among other updates. Announced June 10, .NET 10 Preview 5 can be downloaded from dotnet.microsoft.com. It includes enhancements to features ranging [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/net-10-preview-5-highlights-c-14-runtime-improvements/">.NET 10 Preview 5 highlights C# 14, runtime improvements</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<div id="remove_no_follow">
<div class="grid grid--cols-10@md grid--cols-8@lg article-column">
<div class="col-12 col-10@md col-6@lg col-start-3@lg">
<div class="article-column__content">
<section class="wp-block-bigbite-multi-title">
<div class="container"></div>
</section>
<p>Microsoft has launched the fifth preview of its planned .NET 10 open source developer platform. The preview release fits C# 14 with user-defined compound assignment operators and enhances the <a href="https://www.infoworld.com/article/2264488/what-is-the-net-framework-microsofts-answer-to-java.html">.NET</a> runtime with escape analysis, among other updates.</p>
<p>Announced <a href="https://devblogs.microsoft.com/dotnet/dotnet-10-preview-5/">June 10</a>, .NET 10 Preview 5 can be downloaded from <a href="https://dotnet.microsoft.com/en-us/download/dotnet/10.0">dotnet.microsoft.com</a>. It includes enhancements to features ranging from the runtime and C# 14 to F# 10, NET MAUI, ASP.NET Core, and Blazor. </p>
<p>C# 14 type authors can now implement compound assignment operators in a user-defined manner that modifies the target in place rather than creating copies. Pre-existing code is unchanged and works the same as before. Meanwhile, in the .NET runtime, the JIT compiler’s escape analysis implementation has been extended to model delegate invokes. When compiling source code to IL (intermediate language), each delegate is transformed into a closure class with a method corresponding to the delegate’s definition and fields matching any captured variables. At runtime, a closure object is created to instantiate the captured variables along with a <code>Func</code> object to invoke the delegate. This runtime preview also enhances the JIT’s inlining policy to take better advantage of profile data. Additionally, F# 10 introduces scoped warning controls with a new <code>#warnon</code> directive supporting fine-grained control over compiler diagnostics.</p>
<p>A production release of .NET 10 is expected this November. .NET 10 Preview 5 follows <a href="https://www.infoworld.com/article/3985923/net-10-preview-4-enhances-zip-processing-jit-compilation-blazor-webassembly.html">Preview 4</a>, announced May 13. The <a href="https://www.infoworld.com/article/3834128/microsofts-net-10-arrives-in-first-preview.html">first preview</a> was unveiled February 25, followed by <a href="https://www.infoworld.com/article/3850625/microsoft-net-10-preview-2-shines-on-c-runtime-encryption.html">a second preview</a> on March 18, and <a href="https://www.infoworld.com/article/3960731/net-10-preview-3-bolsters-standard-library-c-webassembly.html">the third preview</a>, announced April 10. Other improvements featured in Preview 5 include:</p>
<ul class="wp-block-list">
<li>For ASP.NET Core, developers now can specify a custom security descriptor for <code>HTTP.sys</code> request queues using a new <code>RequestQueueSecurityDescriptor</code> property on <code>HttpSysOptions</code>. This enables more granular control over access rights for the request queue, allowing developers to tailor security to an application’s needs.</li>
<li>The OpenAPI.NET library used in ASP.NET Core OpenAPI document generation has been upgraded to <a href="https://github.com/microsoft/OpenAPI.NET/releases/tag/v2.0.0-preview.18">v2.0.0-preview18</a>.</li>
<li>Blazor now provides an improved way to display a “Not Found” page when navigating to a non-existent page. Developers can specify a page to render when <code>NavigationManager.NotFound()</code> is called by passing a page type to the <code>Router</code> component using the <code>NotFoundPage</code> parameter.</li>
<li>For .NET MAUI, projects now can combine XML namespaces into a new global namespace, <code>xmlns="http://schemas.microsoft.com/dotnet/maui/global"</code>, and use these without prefixes.</li>
<li>For Windows Presentation Foundation, the release introduces a shorthand syntax for defining <code>Grid.RowDefinitions</code> and <code>Grid.ColumnDefinitions</code> in XAML, with support for XAML Hot Reload. Performance and code quality are also improved.</li>
</ul>
</div>
</div>
</div>
</div>
</div><p>The post <a href="https://www.azalio.io/net-10-preview-5-highlights-c-14-runtime-improvements/">.NET 10 Preview 5 highlights C# 14, runtime improvements</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Beware these 10 malicious VS Code extensions</title>
		<link>https://www.azalio.io/beware-these-10-malicious-vs-code-extensions/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Tue, 08 Apr 2025 16:02:10 +0000</pubDate>
				<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://www.azalio.io/beware-these-10-malicious-vs-code-extensions/</guid>

					<description><![CDATA[<p>Developers using Microsoft’s Visual Studio Code (VSCode) editor are being warned to delete, or at least stay away from, 10 newly published extensions which will trigger the installation of a cryptominer.  The warning comes from researchers at Extension Total, who said possibly as many as 1 million of these malicious extensions, which pretend to be popular [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/beware-these-10-malicious-vs-code-extensions/">Beware these 10 malicious VS Code extensions</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<div id="remove_no_follow">
<div class="grid grid--cols-10@md grid--cols-8@lg article-column">
<div class="col-12 col-10@md col-6@lg col-start-3@lg">
<div class="article-column__content">
<section class="wp-block-bigbite-multi-title">
<div class="container"></div>
</section>
<p>Developers using Microsoft’s Visual Studio Code (VSCode) editor are being warned to delete, or at least stay away from, 10 newly published extensions which will trigger the installation of a cryptominer.</p>
<p> The warning <a href="https://blog.extensiontotal.com/mining-in-plain-sight-the-vs-code-extension-cryptojacking-campaign-19ca12904b59" target="_blank" rel="noreferrer noopener">comes from researchers at Extension Total</a>, who said possibly as many as 1 million of these malicious extensions, which pretend to be popular development tools, may have been installed since April 4, when they were published on Microsoft’s Visual Studio Code Marketplace. However, the researchers also suspect the threat actors may have inflated the download numbers.</p>
<p><a href="https://www.csoonline.com/article/3956464/warning-to-developers-stay-away-from-these-10-vscode-extensions.html">Continue reading on CSO.</a></p>
</div>
</div>
</div>
</div>
</div><p>The post <a href="https://www.azalio.io/beware-these-10-malicious-vs-code-extensions/">Beware these 10 malicious VS Code extensions</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Microsoft’s new DocumentDB builds on PostgreSQL</title>
		<link>https://www.azalio.io/microsofts-new-documentdb-builds-on-postgresql/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Thu, 30 Jan 2025 16:59:30 +0000</pubDate>
				<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://www.azalio.io/microsofts-new-documentdb-builds-on-postgresql/</guid>

					<description><![CDATA[<p>Microsoft’s recent launch of a standalone version of the MongoDB compatibility layer for its global-scale Azure Cosmos DB brought back an old name. Back in 2018, when the company unveiled a public version of the Project Florence database engine that powers much of Azure, they called it DocumentDB. That original name worked well for some [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/microsofts-new-documentdb-builds-on-postgresql/">Microsoft’s new DocumentDB builds on PostgreSQL</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<div id="remove_no_follow">
<div class="grid grid--cols-10@md grid--cols-8@lg article-column">
<div class="col-12 col-10@md col-6@lg col-start-3@lg">
<div class="article-column__content">
<section class="wp-block-bigbite-multi-title">
<div class="container"></div>
</section>
<h1 class="wp-block-heading">Microsoft’s recent launch of a standalone version of the <a href="https://www.infoworld.com/article/3623357/what-is-mongodb-a-quick-guide-for-developers.html">MongoDB</a> compatibility layer for its global-scale Azure Cosmos DB brought back an old name. Back in 2018, when the company unveiled a public version of the <a href="https://techcommunity.microsoft.com/blog/mvp-blog/build-your-first-planet-scale-app-with-azure-cosmos-db/428809">Project Florence database engine</a> that powers much of Azure, they <a href="https://www.infoworld.com/article/2251030/get-the-most-out-of-azures-global-documentdb.html">called it DocumentDB</a>. That original name worked well for some of the database’s personalities, but its support for much more than <a href="https://www.infoworld.com/article/3222851/what-is-json-a-better-format-for-data-exchange.html">JSON</a> documents soon led to a new, now more familiar name. Cosmos DB has continued to evolve, with its document database capabilities offering a familiar set of MongoDB-compatible <a href="https://www.infoworld.com/article/3269878/what-is-an-api-application-programming-interfaces-explained.html">APIs</a>.</h1>
<p>A recent set of updates introduced <a href="https://www.infoworld.com/article/2338506/azure-cosmos-db-joins-the-ai-toolchain.html">the vCore variant of Azure Cosmos DB</a>, which moves from the multi-tenant, cross-region, transparently scalable resource unit-based Cosmos DB to an alternative architecture that behaves more like traditional Azure services, with defined host virtual machines and a more predictable pricing model. The vCore-based MongoDB APIs are the same as those used with the cloud-scale resource unit version, but the underlying technologies are quite different, and moving from one version to the other requires a complete migration of your data.</p>
<p>Last week Microsoft revealed the differences in the two implementations when it unveiled an open-source release of the vCore Cosmos DB engine. Built on the familiar PostgreSQL platform, the new public project adds <a href="https://www.infoworld.com/article/3240644/what-is-nosql-databases-for-a-cloud-scale-future.html">NoSQL</a> features with the MongoDB APIs. As it focuses purely on storing JSON content, <a href="https://opensource.microsoft.com/blog/2025/01/23/documentdb-open-source-announcement/">Microsoft decided to bring back the original DocumentDB name</a>.</p>
<p>The new DocumentDB comes with a permissive MIT license and is intended to provide a standard NoSQL environment for your data to reduce the complexity associated with migrating from one platform to another. Choosing to work with PostgreSQL is part of that, as it has long been a popular platform for developers, one that’s had something of a recent renaissance.</p>
<h2 class="wp-block-heading" id="a-modern-nosql-database-with-postgresql-roots">A modern NoSQL database with PostgreSQL roots</h2>
<p>By open sourcing a tool that’s already widely used in Azure, Microsoft is giving developers the ability to run something that’s already proven to work well. Most of the <a href="https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/faq">features we expect to find in a modern NoSQL store</a> are already there, from basic CRUD (create, read, update, delete) operations to more complex vector search tools and the indexes needed to support them. This ensures you will be able to build on and extend a database that can support most scenarios.</p>
<p>DocumentDB sits on top of the existing PostgreSQL platform, which manages storage, indexing, and other key low-level operations. The result is that DocumentDB is implemented using two components: one to add support for <a href="https://en.wikipedia.org/wiki/BSON">BSON (Binary JavaScript Object Notation</a>) data types and one to support the DocumentDB APIs, adding CRUD operations, queries, and index management.</p>
<p>BSON is the fundamental data type used in MongoDB but with implementations in most common languages. If you’re going to build a common NoSQL store based on MongoDB APIs, then BSON will be the way you represent your standard NoSQL data structures, such as key-value pairs and arrays. It’s easy to build JSON documents, but using BSON allows you to store and search content more effectively.</p>
<p>You can think of DocumentDB as a stack. At the bottom is PostgreSQL itself, then the DocumentDB extension that gives the database the ability to work with BSON data. Once installed it lets you parse BSON data and then use the PostgreSQL engine to build indexes, not only using the database engine’s standard tools but also other extensions. The result is the ability to deliver complex indexes that support all kinds of queries.</p>
<p>One useful feature is the ability to use PostgreSQL’s vector index capabilities to build your BSON data into a <a href="https://www.infoworld.com/article/3712227/what-is-rag-more-accurate-and-reliable-llms.html">retrieval-augmented generation</a> (RAG) application or use nearest-neighbor searches to build recommendation engines or identify fraud patterns. There’s a lot of utility in a NoSQL database with many different indexing options; it gives you the necessary foundations for many different application types—all working on the same data set.</p>
<h2 class="wp-block-heading" id="getting-started-with-documentdb">Getting started with DocumentDB</h2>
<p>This first public release of DocumentDB inherits code already running in Azure, so it’s ready to build and use, <a href="https://github.com/microsoft/documentdb">hosted on GitHub</a>. The instructions in the project wiki are focused on using <a href="https://www.infoworld.com/article/3666488/what-is-visual-studio-code-microsofts-extensible-code-editor.html">VS Code</a> and <a href="https://www.infoworld.com/article/3204171/what-is-docker-the-spark-for-the-container-revolution.html">Docker</a> to build on top of WSL 2.0, though you can use any <a href="https://www.infoworld.com/article/3711723/thirty-two-years-of-linux-and-its-community.html">Linux</a> via VS Code’s remote engine. You build the container, then make, install, and launch the binaries. The DocumentDB container already holds PostgreSQL, so once setup is complete, you can connect to its shell and start experimenting with BSON support.</p>
<p>From the shell, you can embed API calls in select statements. This allows you to experiment with operations before adding them to calls from your code. The shell lets you build collections, add items, and experiment with CRUD operations. Other operations apply filters and support queries, as well as building indexes across one or more fields in a collection. You can find a lengthy list of documented API functions in the project wiki, grouped into common sets of operations.</p>
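<p>As a hedged sketch of that workflow from Python rather than the shell, the snippet below uses psycopg2 to embed two such API calls in SELECT statements. The connection settings are placeholders for a local container, and the <code>documentdb_api.insert_one</code> and <code>documentdb_api.collection</code> function names follow the examples in the project wiki at the time of writing, so treat them as illustrative and confirm against the current documentation.</p>
<pre class="wp-block-code"><code>
import json
import psycopg2

# Placeholder connection settings for a locally running DocumentDB container.
conn = psycopg2.connect(host="localhost", port=5432, user="postgres", dbname="postgres")
conn.autocommit = True
cur = conn.cursor()

doc = {"patient_id": "P001", "name": "Alex", "scores": [7, 9]}

# Insert a document by embedding the API call in a SELECT, as described above.
# Function names follow the project wiki and may change as the project evolves.
cur.execute("SELECT documentdb_api.insert_one('testdb', 'patients', %s)", (json.dumps(doc),))

# Read the collection back through the extension's collection function.
cur.execute("SELECT document FROM documentdb_api.collection('testdb', 'patients')")
for (document,) in cur.fetchall():
    print(document)
</code></pre>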
<p>For now, <a href="https://github.com/microsoft/documentdb/wiki/Functions#crud-functions">the GitHub wiki is the main source of documentation for DocumentDB</a>. It’s a little on the thin side and could do with more examples. However, DocumentDB is currently intended for developers who want an alternative to MongoDB, one that’s available with an open source license rather than a source-available license. For now, as there’s no SDK, you’ll need to build your own calls to the API. These are based on MongoDB, so porting applications shouldn’t be too complex.</p>
<h2 class="wp-block-heading" id="why-this-why-now">Why this? Why now?</h2>
<p>The reasoning behind the DocumentDB project seems to be the big ambition to deliver a standard NoSQL API and engine, much like that developed for SQL. Microsoft has a lot of experience working in standards bodies, especially building and delivering the essential tests needed to make sure that any implementation of the resulting standard meets the necessary requirements.</p>
<p>We’ve seen Microsoft deliver extensive test suites for protocols and languages, and we can expect this level of tooling to be a key component of any future NoSQL standard. We need common APIs and engine features to help with application and data portability. A common standard will allow NoSQL stores to compete on performance and other business-essential features such as scalability and resilience.</p>
<p>DocumentDB’s layered approach to delivering basic functionality is perhaps the most important part of what Microsoft is doing here. The <a href="https://opensource.microsoft.com/blog/2025/01/23/documentdb-open-source-announcement/">blog post announcing DocumentDB</a> talks about “a protocol translation layer” on top of the BSON extension, bridging APIs to the document store in a way that makes it possible to have a single store that looks like MongoDB to one set of clients, Aerospike to another, or CouchDB, Couchbase, and more.</p>
<h2 class="wp-block-heading" id="a-reference-for-a-nosql-standard">A reference for a NoSQL standard</h2>
<p>For DocumentDB to be the foundation of a NoSQL standard, it has to be vendor-neutral. By allowing you to switch protocols on top of the same underlying store, you can use the APIs you’re familiar with, no matter their source. Query engine designers can focus on their area of expertise, while the PostgreSQL team can continue to deliver the resilient, high-performance database necessary for modern applications.</p>
<p>One example of this is the open source <a href="https://blog.ferretdb.io/">FerretDB NoSQL database</a>. The latest release, FerretDB 2.0, is built using DocumentDB and gets a considerable performance increase. The FerretDB team can continue to work on its own features, taking advantage of the open source DocumentDB to provide the core BSON support necessary for a MongoDB-compatible NoSQL database. The team claims up to 20x better performance, and it will continue to use its own Apache 2.0 license in parallel with Microsoft’s MIT license.</p>
<p>Another interesting point shows how much Microsoft has changed in the past decade or so: The first product shipping on the standalone DocumentDB is coming from Ferret, an open source company that’s not Microsoft.</p>
<p>DocumentDB is a project to keep an eye on, especially when Microsoft starts the process of using it as a reference implementation for a new NoSQL standard. With community support, hopefully we’ll then see a rapid rollout of the MongoDB API features that are currently missing—adding them into both the middleware layer to map them to PostgreSQL operations and the API implementation.</p>
</div>
</div>
</div>
</div>
</div><p>The post <a href="https://www.azalio.io/microsofts-new-documentdb-builds-on-postgresql/">Microsoft’s new DocumentDB builds on PostgreSQL</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Vue JavaScript framework boosts reactivity system</title>
		<link>https://www.azalio.io/vue-javascript-framework-boosts-reactivity-system/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Thu, 05 Sep 2024 23:18:42 +0000</pubDate>
				<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://52.66.252.192/vue-javascript-framework-boosts-reactivity-system/</guid>

					<description><![CDATA[<p>Vue 3.5, an update to the popular “progressive” JavaScript framework, emphasizes improvements to the platform’s reactivity system, for better performance and improved memory usage. Vue 3.5, described as a minor release with no breaking changes, was announced September 1. However, the release includes a major refactor of the reactivity system that boosts performance and significantly [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/vue-javascript-framework-boosts-reactivity-system/">Vue JavaScript framework boosts reactivity system</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<div id="remove_no_follow">
<div class="grid grid--cols-10@md grid--cols-8@lg article-column">
<div class="col-12 col-10@md col-6@lg col-start-3@lg">
<div class="article-column__content">
<section class="wp-block-bigbite-multi-title">
<div class="container"></div>
</section>
<p>Vue 3.5, an update to the popular “progressive” <a href="https://www.infoworld.com/article/3441178/what-is-javascript-the-full-stack-programming-language.html">JavaScript</a> framework, emphasizes improvements to the platform’s reactivity system for better performance and reduced memory usage.</p>
<p>Vue 3.5, described as a minor release with no breaking changes, was announced <a href="https://blog.vuejs.org/posts/vue-3-5#reactivity-system-optimizations">September 1</a>. However, the release includes a major refactor of the reactivity system that boosts performance and significantly improves memory usage (-56%) with no behavior changes, Vue creator Evan You wrote in a <a href="https://blog.vuejs.org/posts/vue-3-5#reactivity-system-optimizations">blog post</a>. </p>
<p>The Vue 3.5 release also resolves stale computed values and memory issues caused by hanging computeds during SSR (server-side rendering). Additionally, reactivity tracking has been optimized for large, deeply reactive arrays, making these operations as much as 10x faster in some cases. Reactive props destructure, meanwhile, has been stabilized and is now enabled by default. Variables destructured from a <code>defineProps</code> call in <code>&lt;script setup&gt;</code> are now reactive. This simplifies declaring props with default values, You said. </p>
<p>For SSR in Vue 3.5, async components now can control when they should be hydrated, by specifying a strategy through the hydrate option of the <code>defineAsyncComponent()</code> API. Vue 3.5 also fixes longstanding issues pertaining to the <code>defineCustomElement()</code> API, adding new capabilities for authoring custom elements. </p>
<p>Other features in Vue 3.5 include a new way of obtaining <a href="https://vuejs.org/guide/essentials/template-refs.html">Template Refs</a> via the <code>useTemplateRef()</code> API and the introduction of a <code>defer</code> prop for <code>&lt;teleport&gt;</code>, which mounts it after the current render cycle. And Vue 3.5 introduces a globally imported API, <a href="https://vuejs.org/api/reactivity-core#onwatchercleanup">onWatcherCleanup()</a>, to register cleanup callbacks in watchers. </p>
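<p>A minimal sketch (TypeScript, Composition API) of two of these additions follows: a lazily hydrated async component using the new <code>hydrate</code> option, and <code>onWatcherCleanup()</code> used to abort an in-flight request when a watcher re-runs. The <code>./Chart.vue</code> import and the <code>/api/search</code> endpoint are illustrative, not taken from the release notes.</p>
<pre><code>import { defineAsyncComponent, hydrateOnVisible, onWatcherCleanup, ref, watch } from 'vue'

// Lazy hydration (new in Vue 3.5): the async component is only hydrated
// on the client once it scrolls into view.
export const LazyChart = defineAsyncComponent({
  loader: () => import('./Chart.vue'), // illustrative component path
  hydrate: hydrateOnVisible(),
})

// onWatcherCleanup() registers a callback that runs before the watcher
// re-runs or when it is stopped -- handy for cancelling stale requests.
const query = ref('')
watch(query, (q) => {
  const controller = new AbortController()
  fetch('/api/search?q=' + encodeURIComponent(q), { signal: controller.signal })
  onWatcherCleanup(() => controller.abort())
})
</code></pre>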
<p>Vue 3.5 has been followed up this week with versions <a href="https://github.com/vuejs/core/compare/v3.5.0...v3.5.1">3.5.1</a>, with bug fixes, and <a href="https://github.com/vuejs/core/blob/main/CHANGELOG.md">3.5.2</a>, with a <a href="https://github.com/vuejs/core/pull/11819">compiler-core feature</a> to parse modifiers as an expression to provide location data.</p>
</div>
</div>
</div>
</div>
</div><p>The post <a href="https://www.azalio.io/vue-javascript-framework-boosts-reactivity-system/">Vue JavaScript framework boosts reactivity system</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>FTC’s non-compete ban almost certainly dead, based on a Texas federal court decision</title>
		<link>https://www.azalio.io/ftcs-non-compete-ban-almost-certainly-dead-based-on-a-texas-federal-court-decision/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Mon, 08 Jul 2024 18:58:35 +0000</pubDate>
				<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://www.azalio.io/ftcs-non-compete-ban-almost-certainly-dead-based-on-a-texas-federal-court-decision/</guid>

					<description><![CDATA[<p>In a highly-anticipated federal ruling on July 3, US District Court Judge Ada Brown determined that the US Federal Trade Commission (FTC) did not have the authority to issue a nationwide ban of non-compete agreements. Although the judge’s decision was preliminary, employment lawyers watching the case agree that the FTC non-compete move is effectively dead. [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/ftcs-non-compete-ban-almost-certainly-dead-based-on-a-texas-federal-court-decision/">FTC’s non-compete ban almost certainly dead, based on a Texas federal court decision</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<article>
<section class="page">
<p>In a highly anticipated federal ruling on July 3, US District Court Judge Ada Brown determined that the US Federal Trade Commission (FTC) did not have the authority to issue a nationwide ban on non-compete agreements. Although the judge’s decision was preliminary, employment lawyers watching the case agree that the FTC non-compete move is effectively dead.</p>
<p>Brown, of the US District Court for the Northern District of Texas, said that she would issue a final ruling on Aug. 30, the day before the FTC ban was slated to take effect. But based on the strong wording of her preliminary decision, there seemed little doubt that she would ultimately block the ban. </p>
<p class="jumpTag"><a href="https://www.infoworld.com/article/3715606/ftc-s-non-compete-ban-almost-certainly-dead-based-on-a-texas-federal-court-decision.html#jump">To read this article in full, please click here</a></p>
</section>
</article>
</div><p>The post <a href="https://www.azalio.io/ftcs-non-compete-ban-almost-certainly-dead-based-on-a-texas-federal-court-decision/">FTC’s non-compete ban almost certainly dead, based on a Texas federal court decision</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>How evolving AI regulations impact cybersecurity</title>
		<link>https://www.azalio.io/how-evolving-ai-regulations-impact-cybersecurity/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Tue, 02 Jul 2024 09:58:45 +0000</pubDate>
				<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://www.azalio.io/how-evolving-ai-regulations-impact-cybersecurity/</guid>

					<description><![CDATA[<p>While their business and tech colleagues are busy experimenting and developing new applications, cybersecurity leaders are looking for ways to anticipate and counter new, AI-driven threats. It’s always been clear that AI impacts cybersecurity, but it’s a two-way street. Where AI is increasingly being used to predict and mitigate attacks, these applications are themselves vulnerable. [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/how-evolving-ai-regulations-impact-cybersecurity/">How evolving AI regulations impact cybersecurity</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<article>
<section class="page">
<p>While their business and tech colleagues are busy experimenting and developing new applications, cybersecurity leaders are looking for ways to anticipate and counter new, AI-driven threats.</p>
<p>It’s always been clear that AI impacts cybersecurity, but it’s a two-way street. While AI is increasingly being used to predict and mitigate attacks, these applications are themselves vulnerable. The same automation, scale, and speed everyone’s excited about are also available to cybercriminals and threat actors. Although far from mainstream yet, malicious use of AI has been growing. From generative adversarial networks to massive botnets and automated DDoS attacks, the potential is there for a new breed of cyberattack that can adapt and learn to evade detection and mitigation.</p>
<p class="jumpTag"><a href="https://www.infoworld.com/article/3715603/how-evolving-ai-regulations-impact-cybersecurity.html#jump">To read this article in full, please click here</a></p>
</section>
</article>
</div><p>The post <a href="https://www.azalio.io/how-evolving-ai-regulations-impact-cybersecurity/">How evolving AI regulations impact cybersecurity</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>GitHub Artifact Attestations now generally available</title>
		<link>https://www.azalio.io/github-artifact-attestations-now-generally-available/</link>
		
		<dc:creator><![CDATA[Azalio tdshpsk]]></dc:creator>
		<pubDate>Fri, 28 Jun 2024 02:03:57 +0000</pubDate>
				<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://www.azalio.io/github-artifact-attestations-now-generally-available/</guid>

					<description><![CDATA[<p>GitHub’s Artfact Attestations, for guaranteeing the integrity of artifacts built inside the GitHub Actions CI/CD platform, is now generally available. General availability was announced June 25. By using Artifact Attestations in GitHub Actions workflows, developers can improve security and protect against supply chain attacks and unauthorized modifications, GitHub said. As part of the announcement, GitHub [&#8230;]</p>
<p>The post <a href="https://www.azalio.io/github-artifact-attestations-now-generally-available/">GitHub Artifact Attestations now generally available</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></description>
										<content:encoded><![CDATA[<div>
<article>
<section class="page">
<p>GitHub’s Artifact Attestations, for guaranteeing the integrity of artifacts built inside the GitHub Actions CI/CD platform, is now generally available.</p>
<p><a href="https://github.blog/changelog/2024-06-25-artifact-attestations-is-generally-available/" rel="nofollow">General availability was announced June 25</a>. By using Artifact Attestations in GitHub Actions workflows, developers can improve security and protect against supply chain attacks and unauthorized modifications, GitHub said. As part of the announcement, GitHub also introduced the Kubernetes Policy Controller, which lets developers validate attestations directly within <a href="https://www.infoworld.com/article/3268073/what-is-kubernetes-your-next-application-platform.html">Kubernetes</a> as an added layer of security.</p>
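<p>As a rough sketch of how this fits into a workflow (the job name, artifact path, and build command below are hypothetical), a build step can be followed by GitHub’s <code>attest-build-provenance</code> action, which needs <code>id-token</code> and <code>attestations</code> write permissions:</p>
<pre><code>name: build-and-attest
on: push

permissions:
  contents: read
  id-token: write        # request an OIDC token for signing
  attestations: write    # store the attestation on GitHub

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build artifact
        run: make dist/app.tar.gz   # hypothetical build command
      - name: Attest build provenance
        uses: actions/attest-build-provenance@v1
        with:
          subject-path: dist/app.tar.gz
</code></pre>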
<p class="jumpTag"><a href="https://www.infoworld.com/article/3715705/github-artifact-attestations-now-generally-available.html#jump">To read this article in full, please click here</a></p>
</section>
</article>
</div><p>The post <a href="https://www.azalio.io/github-artifact-attestations-now-generally-available/">GitHub Artifact Attestations now generally available</a> first appeared on <a href="https://www.azalio.io">Azalio</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
