Artificial Intelligence | ShowMeAI News Daily #2022.06.26


The ShowMeAI Daily series has been fully upgraded! It covers AI directions including Tools & Frameworks | Projects & Code | Blogs & Sharing | Data & Resources | Research & Papers. Click to view the article archive, subscribe to the topic #ShowMeAI资讯日报 in the official account to receive the latest daily updates, and click Collections & Monthly Digest to quickly browse the full collections of each topic.

1. Tools & Frameworks


Tool: Try to hijack AI! - a toolbox for probing machine learning model vulnerabilities

tags: [machine learning, model vulnerabilities, detection tools]

‘Try to hijack AI! – reveal the vulnerabilities of machine learning models, algorithms for AI security such as Model Inversion, Poisoning Attack, Evasion Attack, Differential Privacy, and Homomorphic Encryption’ by Syumei

GitHub: github.com/Koukyosyume…
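
The repository covers attack families such as evasion attacks. For a concrete flavor of that category, here is a minimal FGSM-style evasion sketch in PyTorch; it is illustrative only (the `model`, `x`, and `y` placeholders are assumptions, not code from the toolbox above).

```python
# Minimal FGSM-style evasion attack sketch (illustrative; not from the repository above).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x in the direction of the loss gradient to flip the prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clamp back to a valid image range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```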


Tool: Zotero PDF Translate - a PDF translation add-on for Zotero 6

tags: [PDF add-on, translation]

‘Zotero PDF Translate – PDF translation add-on for Zotero 6’ by windingwind

GitHub: github.com/windingwind…


Library: HyperTS - an easy-to-use, efficient, unified full-pipeline automated time series analysis toolkit supporting time series forecasting, classification, and regression

tags: [time series, time series classification, time series regression]

‘HyperTS – A Full-Pipeline Automated Time Series (AutoTS) Analysis Toolkit.’ by DataCanvasIO

GitHub: github.com/DataCanvasI…
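
A quickstart-style usage sketch is below. It assumes HyperTS exposes a `make_experiment` entry point that takes a dataframe with a named timestamp column, as in its README; the file and column names here are placeholders, so check the repository for the exact API.

```python
# Hypothetical HyperTS quickstart sketch (verify the exact API against the repo README).
import pandas as pd
from hyperts import make_experiment  # assumed entry point

df = pd.read_csv("series.csv")                # placeholder: long-format time series table
train, test = df.iloc[:-48], df.iloc[-48:]    # hold out the last 48 steps

# Run an automated forecasting experiment; 'timestamp' names the time column.
model = make_experiment(train, task="forecast", timestamp="timestamp").run()

# Predict over the held-out horizon (in practice pass only timestamp/covariate columns).
forecast = model.predict(test)
print(forecast.head())
```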


Library: KLineChart - a highly customizable, lightweight candlestick (K-line) chart

tags: [candlestick chart, visualization]

‘Lightweight k-line chart that can be highly customized. Zero dependencies. Support mobile.’ by liihuu

GitHub: github.com/liihuu/KLin…


Library: Auto-Causality - an automated causal inference library

tags: [causal inference, attribution]

‘Auto-Causality: A library for automated Causal Inference model estimation and selection – AutoML for causal inference.’ by TransferWise Ltd.

GitHub: github.com/transferwis…


Library: Feast - a feature store for machine learning

tags: [feature store, machine learning]

‘Feast – Feature Store for Machine Learning’

Feast is an open-source feature store for machine learning. It is one of the fastest paths to getting analytical data into production for model training and online inference.

GitHub: github.com/feast-dev/f…
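
As a minimal sketch of what online feature serving looks like, assuming a feature repository that already registers a `driver_stats` feature view (all names here are illustrative, not part of the repo above):

```python
# Minimal Feast online-retrieval sketch; the feature view and entity names are assumed.
from feast import FeatureStore

store = FeatureStore(repo_path=".")   # directory containing feature_store.yaml

online = store.get_online_features(
    features=["driver_stats:conv_rate", "driver_stats:avg_daily_trips"],
    entity_rows=[{"driver_id": 1001}],
).to_dict()

print(online)
```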


2. Blogs & Sharing


Sharing: Lean Side Business - how programmers can run a side business gracefully

tags: [side business, programmers]

GitHub: github.com/easychen/le…

Link: r.ftqq.com/lean-side-b…


3. Data & Resources


Dataset: HaGRID - a hand gesture recognition image dataset

tags: [gesture recognition, hand gestures, dataset]

‘HaGRID – HAnd Gesture Recognition Image Dataset’ by Alexander Kapitanov

GitHub: github.com/hukenovs/ha…


Resource: PyData London 2022 tutorial materials on Bayesian modeling with PyMC

tags: [PyData, Bayesian modeling]

‘Probabilistic Python: An Introduction to Bayesian Modeling with PyMC – PyData London 2022 Tutorial’ by Chris Fonnesbeck

GitHub: github.com/fonnesbeck/…
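
In the spirit of the tutorial, here is a minimal PyMC model (using the current `pymc` package; the synthetic data is made up purely for illustration):

```python
# Minimal PyMC example: infer the mean and spread of noisy observations.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=1.0, size=100)      # synthetic observations

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)          # weakly-informative priors
    sigma = pm.HalfNormal("sigma", sigma=5.0)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=data)
    idata = pm.sample(1000, tune=1000)                # NUTS posterior sampling

print(float(idata.posterior["mu"].mean()))
```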


4. Research & Papers


Reply with the keyword “日报” in the official account to get the curated June paper collection for free.

Paper: ARF: Artistic Radiance Fields

Paper title: ARF: Artistic Radiance Fields

Published: 13 Jun 2022

Field: Computer Vision

Tasks: image generation, artistic creation

Paper link: arxiv.org/abs/2206.06…

Code: github.com/Kai-46/ARF-…

Authors: Kai Zhang, Nick Kolkin, Sai Bi, Fujun Luan, Zexiang Xu, Eli Shechtman, Noah Snavely

Summary: We present a method for transferring the artistic features of an arbitrary style image to a 3D scene.

Abstract: We present a method for transferring the artistic features of an arbitrary style image to a 3D scene. Previous methods that perform 3D stylization on point clouds or meshes are sensitive to geometric reconstruction errors for complex real-world scenes. Instead, we propose to stylize the more robust radiance field representation. We find that the commonly used Gram matrix-based loss tends to produce blurry results without faithful brushstrokes, and introduce a nearest neighbor-based loss that is highly effective at capturing style details while maintaining multi-view consistency. We also propose a novel deferred back-propagation method to enable optimization of memory-intensive radiance fields using style losses defined on full-resolution rendered images. Our extensive evaluation demonstrates that our method outperforms baselines by generating artistic appearance that more closely resembles the style image. Please check our project page for video results and open-source implementations: www.cs.cornell.edu/projects/ar…
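
The abstract contrasts the Gram-matrix loss with a nearest neighbor-based feature loss. A rough sketch of the nearest-neighbor matching idea, written from the abstract rather than the official code, is:

```python
# Illustrative nearest-neighbor feature-matching loss (not the official ARF implementation).
import torch
import torch.nn.functional as F

def nn_feature_match_loss(render_feats, style_feats):
    """render_feats: (N, C) features of the rendered view; style_feats: (M, C) of the style image."""
    r = F.normalize(render_feats, dim=-1)
    s = F.normalize(style_feats, dim=-1)
    dist = 1.0 - r @ s.t()            # cosine distance to every style feature, shape (N, M)
    nn_dist, _ = dist.min(dim=1)      # each rendered feature matches its nearest style feature
    return nn_dist.mean()
```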


Paper: Zero-Shot AutoML with Pretrained Models

Paper title: Zero-Shot AutoML with Pretrained Models

Published: 16 Jun 2022

Field: Machine Learning

Tasks: AutoML, Meta-Learning

Paper link: arxiv.org/abs/2206.08…

Code: github.com/automl/zero…

Authors: Ekrem Öztürk, Fabio Ferreira, Hadi S. Jomaa, Lars Schmidt-Thieme, Josif Grabocka, Frank Hutter

Summary: Given a new dataset D and a low compute budget, how should we choose a pre-trained model to fine-tune to D, and set the fine-tuning hyperparameters without risking overfitting, particularly if D is small?

Abstract: Given a new dataset D and a low compute budget, how should we choose a pre-trained model to fine-tune to D, and set the fine-tuning hyperparameters without risking overfitting, particularly if D is small? Here, we extend automated machine learning (AutoML) to best make these choices. Our domain-independent meta-learning approach learns a zero-shot surrogate model which, at test time, allows to select the right deep learning (DL) pipeline (including the pre-trained model and fine-tuning hyperparameters) for a new dataset D given only trivial meta-features describing D such as image resolution or the number of classes. To train this zero-shot model, we collect performance data for many DL pipelines on a large collection of datasets and meta-train on this data to minimize a pairwise ranking objective. We evaluate our approach under the strict time limit of the vision track of the ChaLearn AutoDL challenge benchmark, clearly outperforming all challenge contenders.
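
The surrogate is meta-trained with a pairwise ranking objective. As a generic illustration (not the authors' exact formulation), a margin-based pairwise ranking loss over surrogate scores could look like this:

```python
# Generic margin-based pairwise ranking loss sketch (illustrative, not the paper's code).
import torch.nn.functional as F

def pairwise_ranking_loss(score_better, score_worse, margin=0.1):
    """Surrogate scores for pipeline pairs on the same dataset, where the first pipeline
    is known to outperform the second; push their score gap above the margin."""
    return F.relu(margin - (score_better - score_worse)).mean()
```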


Paper: Learning Implicit Feature Alignment Function for Semantic Segmentation

Paper title: Learning Implicit Feature Alignment Function for Semantic Segmentation

Published: 17 Jun 2022

Field: Computer Vision

Tasks: Semantic Segmentation

Paper link: arxiv.org/abs/2206.08…

Code: github.com/hzhupku/ifa

Authors: Hanzhe Hu, Yinbo Chen, Jiarui Xu, Shubhankar Borse, Hong Cai, Fatih Porikli, Xiaolong Wang

Summary: As such, IFA implicitly aligns the feature maps at different levels and is capable of producing segmentation maps in arbitrary resolutions.

Abstract: Integrating high-level context information with low-level details is of central importance in semantic segmentation. Towards this end, most existing segmentation models apply bilinear up-sampling and convolutions to feature maps of different scales, and then align them at the same resolution. However, bilinear up-sampling blurs the precise information learned in these feature maps and convolutions incur extra computation costs. To address these issues, we propose the Implicit Feature Alignment function (IFA). Our method is inspired by the rapidly expanding topic of implicit neural representations, where coordinate-based neural networks are used to designate fields of signals. In IFA, feature vectors are viewed as representing a 2D field of information. Given a query coordinate, nearby feature vectors with their relative coordinates are taken from the multi-level feature maps and then fed into an MLP to generate the corresponding output. As such, IFA implicitly aligns the feature maps at different levels and is capable of producing segmentation maps in arbitrary resolutions. We demonstrate the efficacy of IFA on multiple datasets, including Cityscapes, PASCAL Context, and ADE20K. Our method can be combined with improvement on various architectures, and it achieves state-of-the-art computation-accuracy trade-off on common benchmarks. Code will be made available at github.com/hzhupku/ifa
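
A heavily simplified, hypothetical sketch of the query step described above, using a single feature level, a nearest-cell lookup, and a relative offset fed to an MLP, might look like this (the real IFA uses multi-level features and a richer neighborhood):

```python
# Hypothetical, simplified implicit-alignment query (illustrative, not the IFA code).
import torch
import torch.nn as nn

class ImplicitQuery(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, feat_map, coords):
        """feat_map: (C, H, W) low-res features; coords: (N, 2) query points in [0, 1]^2."""
        C, H, W = feat_map.shape
        ix = (coords[:, 0] * (W - 1)).round().long()            # nearest cell per query
        iy = (coords[:, 1] * (H - 1)).round().long()
        feats = feat_map[:, iy, ix].t()                          # (N, C)
        centers = torch.stack([ix.float() / (W - 1), iy.float() / (H - 1)], dim=-1)
        rel = coords - centers                                   # relative offsets
        return self.mlp(torch.cat([feats, rel], dim=-1))         # (N, num_classes)
```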


Paper: Lite Pose: Efficient Architecture Design for 2D Human Pose Estimation

Paper title: Lite Pose: Efficient Architecture Design for 2D Human Pose Estimation

Published: CVPR 2022

Field: Computer Vision

Tasks: 2D Human Pose Estimation, Multi-Person Pose Estimation, Pose Estimation

Paper link: arxiv.org/abs/2205.01…

Code: github.com/mit-han-lab…

Authors: Yihan Wang, Muyang Li, Han Cai, Wei-Ming Chen, Song Han

Summary: Inspired by this finding, we design LitePose, an efficient single-branch architecture for pose estimation, and introduce two simple approaches to enhance the capacity of LitePose, including Fusion Deconv Head and Large Kernel Convs.

Abstract: Pose estimation plays a critical role in human-centered vision applications. However, it is difficult to deploy state-of-the-art HRNet-based pose estimation models on resource-constrained edge devices due to the high computational cost (more than 150 GMACs per frame). In this paper, we study efficient architecture design for real-time multi-person pose estimation on edge. We reveal that HRNet’s high-resolution branches are redundant for models at the low-computation region via our gradual shrinking experiments. Removing them improves both efficiency and performance. Inspired by this finding, we design LitePose, an efficient single-branch architecture for pose estimation, and introduce two simple approaches to enhance the capacity of LitePose, including Fusion Deconv Head and Large Kernel Convs. Fusion Deconv Head removes the redundancy in high-resolution branches, allowing scale-aware feature fusion with low overhead. Large Kernel Convs significantly improve the model’s capacity and receptive field while maintaining a low computational cost. With only 25% computation increment, 7×7 kernels achieve +14.0 mAP better than 3×3 kernels on the CrowdPose dataset. On mobile platforms, LitePose reduces the latency by up to 5.0x without sacrificing performance, compared with prior state-of-the-art efficient pose estimation models, pushing the frontier of real-time multi-person pose estimation on edge. Our code and pre-trained models are released at github.com/mit-han-lab…
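
The quoted "only 25% computation increment" for 7×7 kernels is plausible when the large kernel is depthwise and the 1×1 pointwise convolutions dominate the cost. A back-of-the-envelope MAC count (my own arithmetic, not figures from the paper) illustrates why:

```python
# Rough MAC accounting for a depthwise-separable block (illustrative arithmetic only).
def separable_block_macs(c_in, c_out, h, w, k):
    depthwise = k * k * c_in * h * w        # one k x k filter per input channel
    pointwise = c_in * c_out * h * w        # 1x1 projection dominates for large c_out
    return depthwise + pointwise

small = separable_block_macs(128, 128, 64, 64, k=3)
large = separable_block_macs(128, 128, 64, 64, k=7)
print(f"7x7 vs 3x3 cost ratio: {large / small:.2f}x")  # ~1.29x, since pointwise dominates
```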


Paper: Flowformer: Linearizing Transformers with Conservation Flows

Paper title: Flowformer: Linearizing Transformers with Conservation Flows

Published: 13 Feb 2022

Field: Time Series

Tasks: time series

Paper link: arxiv.org/abs/2202.06…

Code: github.com/thuml/Flowf…

Authors: Haixu Wu, Jialong Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long

Summary: By respectively conserving the incoming flow of sinks for source competition and the outgoing flow of sources for sink allocation, Flow-Attention inherently generates informative attentions without using specific inductive biases.

Abstract: Transformers based on the attention mechanism have achieved impressive success in various areas. However, the attention mechanism has a quadratic complexity, significantly impeding Transformers from dealing with numerous tokens and scaling up to bigger models. Previous methods mainly utilize the similarity decomposition and the associativity of matrix multiplication to devise linear-time attention mechanisms. They avoid degeneration of attention to a trivial distribution by reintroducing inductive biases such as the locality, thereby at the expense of model generality and expressiveness. In this paper, we linearize Transformers free from specific inductive biases based on the flow network theory. We cast attention as the information flow aggregated from the sources (values) to the sinks (results) through the learned flow capacities (attentions). Within this framework, we apply the property of flow conservation into attention and propose the Flow-Attention mechanism of linear complexity. By respectively conserving the incoming flow of sinks for source competition and the outgoing flow of sources for sink allocation, Flow-Attention inherently generates informative attentions without using specific inductive biases. Empowered by the Flow-Attention, Flowformer yields strong performance in linear time for wide areas, including long sequence, time series, vision, natural language, and reinforcement learning. The code and settings are available at this repository: github.com/thuml/Flowf…
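
The quadratic-vs-linear point is easiest to see with kernelized linear attention, which reorders the computation as φ(Q)(φ(K)ᵀV) instead of softmax(QKᵀ)V. The sketch below shows only that generic reordering; it is not the paper's Flow-Attention, which additionally applies the flow-conservation normalizations described in the abstract.

```python
# Generic kernelized linear attention sketch (not the Flow-Attention mechanism itself).
import torch

def linear_attention(q, k, v, eps=1e-6):
    """q, k: (B, N, D); v: (B, N, Dv). Cost is O(N * D * Dv) instead of O(N^2)."""
    q, k = torch.relu(q) + eps, torch.relu(k) + eps        # simple non-negative feature map
    kv = torch.einsum("bnd,bne->bde", k, v)                # aggregate keys/values first
    z = 1.0 / torch.einsum("bnd,bd->bn", q, k.sum(dim=1))  # per-query normalization
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)
```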


Paper: HiVT: Hierarchical Vector Transformer for Multi-Agent Motion Prediction

Paper title: HiVT: Hierarchical Vector Transformer for Multi-Agent Motion Prediction

Published: CVPR 2022

Field: Computer Vision

Tasks: Autonomous Driving, Autonomous Vehicles, Motion Forecasting, Motion Prediction, Self-Driving Cars, Trajectory Prediction

Paper link: openaccess.thecvf.com/content/CVP…

Code: github.com/ZikangZhou/…

Authors: Zikang Zhou, Luyao Ye, JianPing Wang, Kui Wu, Kejie Lu

Summary: To tackle this challenge, we propose Hierarchical Vector Transformer (HiVT) for fast and accurate multi-agent motion prediction.

Abstract: Accurately predicting the future motions of surrounding traffic agents is critical for the safety of autonomous vehicles. Recently, vectorized approaches have dominated the motion prediction community due to their capability of capturing complex interactions in traffic scenes. However, existing methods neglect the symmetries of the problem and suffer from the expensive computational cost, facing the challenge of making real-time multi-agent motion prediction without sacrificing the prediction performance. To tackle this challenge, we propose Hierarchical Vector Transformer (HiVT) for fast and accurate multi-agent motion prediction. By decomposing the problem into local context extraction and global interaction modeling, our method can effectively and efficiently model a large number of agents in the scene. Meanwhile, we propose a translation-invariant scene representation and rotation-invariant spatial learning modules, which extract features robust to the geometric transformations of the scene and enable the model to make accurate predictions for multiple agents in a single forward pass. Experiments show that HiVT achieves the state-of-the-art performance on the Argoverse motion forecasting benchmark with a small model size and can make fast multi-agent motion prediction.
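
A translation- and rotation-invariant representation is commonly built by expressing each agent's context in a local frame centred at the agent and aligned with its heading. A small NumPy sketch of that normalization (illustrative, not the HiVT implementation):

```python
# Illustrative agent-centric normalization: translate to the agent's position and
# rotate by its heading so the representation is invariant to global pose.
import numpy as np

def to_agent_frame(points, agent_xy, agent_heading):
    """points: (N, 2) global coordinates; returns them in the agent's local frame."""
    c, s = np.cos(-agent_heading), np.sin(-agent_heading)
    rot = np.array([[c, -s], [s, c]])
    return (points - agent_xy) @ rot.T

neighbors = np.array([[5.0, 2.0], [6.0, 3.5]])
local = to_agent_frame(neighbors, agent_xy=np.array([4.0, 2.0]), agent_heading=np.pi / 2)
print(local)   # coordinates relative to the agent, with its heading along the +x axis
```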


Paper: A Comprehensive Survey on Graph Anomaly Detection with Deep Learning

Paper title: A Comprehensive Survey on Graph Anomaly Detection with Deep Learning

Published: 14 Jun 2021

Field: Machine Learning

Tasks: Anomaly Detection

Paper link: arxiv.org/abs/2106.07…

Code: github.com/XiaoxiaoMa-… , github.com/xiaomingaaa…

Authors: Xiaoxiao Ma, Jia Wu, Shan Xue, Jian Yang, Chuan Zhou, Quan Z. Sheng, Hui Xiong, Leman Akoglu

Summary: In this survey, we aim to provide a systematic and comprehensive review of the contemporary deep learning techniques for graph anomaly detection.

Abstract: Anomalies represent rare observations (e.g., data records or events) that deviate significantly from others. Over several decades, research on anomaly mining has received increasing interests due to the implications of these occurrences in a wide range of disciplines. Anomaly detection, which aims to identify rare observations, is among the most vital tasks in the world, and has shown its power in preventing detrimental events, such as financial fraud, network intrusion, and social spam. The detection task is typically solved by identifying outlying data points in the feature space and inherently overlooks the relational information in real-world data. Graphs have been prevalently used to represent the structural information, which raises the graph anomaly detection problem – identifying anomalous graph objects (i.e., nodes, edges and sub-graphs) in a single graph, or anomalous graphs in a database/set of graphs. However, conventional anomaly detection techniques cannot tackle this problem well because of the complexity of graph data. For the advent of deep learning, graph anomaly detection with deep learning has received a growing attention recently. In this survey, we aim to provide a systematic and comprehensive review of the contemporary deep learning techniques for graph anomaly detection. We compile open-sourced implementations, public datasets, and commonly-used evaluation metrics to provide affluent resources for future studies. More importantly, we highlight twelve extensive future research directions according to our survey results covering unsolved and emerging research problems and real-world applications. With this survey, our goal is to create a “one-stop-shop” that provides a unified understanding of the problem categories and existing approaches, publicly available hands-on resources, and high-impact open challenges for graph anomaly detection using deep learning.


Paper: Small-Text: Active Learning for Text Classification in Python

Paper title: Small-Text: Active Learning for Text Classification in Python

Published: 21 Jul 2021

Field: Natural Language Processing

Tasks: Active Learning, Classification, Multi-class Classification, Multi-Label Text Classification, Text Classification

Paper link: arxiv.org/abs/2107.10…

Code: github.com/webis-de/sm…

Authors: Christopher Schröder, Lydia Müller, Andreas Niekler, Martin Potthast

Summary: We present small-text, a simple and modular active learning library, which offers pool-based active learning for single- and multi-label text classification in Python.

Abstract: We present small-text, a simple and modular active learning library, which offers pool-based active learning for single- and multi-label text classification in Python. It comes with various pre-implemented state-of-the-art query strategies, including some that can leverage the GPU. Clearly defined interfaces allow the combination of a multitude of classifiers, query strategies, and stopping criteria, thereby facilitating a quick mix and match, and enabling a rapid development of both active learning experiments and applications. To make various classifiers accessible in a consistent way, it integrates several well-known existing machine learning libraries, namely, scikit-learn, PyTorch, and huggingface transformers, where the latter integrations are available as optionally installable extensions, making the availability of a GPU completely optional. The library is available under the MIT License at github.com/webis-de/sm…
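
To keep the sketch library-agnostic (and avoid guessing small-text's exact class names), here is the generic pool-based, least-confidence active learning loop that such a library automates, in plain scikit-learn:

```python
# Generic pool-based active learning loop with least-confidence sampling
# (library-agnostic sketch of what small-text automates; not its actual API).
import numpy as np
from sklearn.linear_model import LogisticRegression

def least_confidence_query(model, X_pool, batch_size=10):
    probs = model.predict_proba(X_pool)
    uncertainty = 1.0 - probs.max(axis=1)          # least confident predictions first
    return np.argsort(-uncertainty)[:batch_size]

def active_learning_loop(X_pool, y_oracle, X_seed, y_seed, rounds=5):
    X_train, y_train = X_seed.copy(), y_seed.copy()
    pool_idx = np.arange(len(X_pool))
    for _ in range(rounds):
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        query = least_confidence_query(model, X_pool[pool_idx])
        picked = pool_idx[query]                           # map back to global indices
        X_train = np.vstack([X_train, X_pool[picked]])     # "label" the queried samples
        y_train = np.concatenate([y_train, y_oracle[picked]])
        pool_idx = np.setdiff1d(pool_idx, picked)
    return model
```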


We are ShowMeAI, dedicated to spreading high-quality AI content, sharing industry solutions, and using knowledge to accelerate every step of technical growth! Click to view the article archive, subscribe to the topic #ShowMeAI资讯日报 in the official account to receive the latest daily updates, and click Collections & Monthly Digest to quickly browse the full collections of each topic.


  • Author: 韩信子@ShowMeAI
  • Article archive
  • Collections & Monthly Digest
  • Notice: all rights reserved; please contact the platform and the author before reposting, and cite the source
  • Comments are welcome; please like this post and recommend valuable articles, tools, or suggestions in the comments, and we will reply as soon as we can~