login
Mixup数据增强方法原始提出者核查
2025/08/15·StepFun can make mistakes, check important info.
“Zhou et al.(2021)在 arXiv 提出 Mixup 数据增强方法”,是否正确?

Deep Research

预备调查清单

一、待核查内容列表

  1. Zhou et al.(2021)在 arXiv 提出 Mixup 数据增强方法

二、核查要点展开

待核查内容1:Zhou et al.(2021)在 arXiv 提出 Mixup 数据增强方法

  • 核查原因:
    1. 该陈述包含三个核心事实要素:作者(Zhou et al.)、时间(2021)和发布平台(arXiv),这些信息组合在一起可能存在事实错误或张冠李戴的情况。
    2. “Mixup”是一种在机器学习领域广为人知的数据增强方法,其原创归属有明确的学术共识,任何与此共识不符的说法都需严格核查。
  • 所需查明信息:
    1. 首次提出“Mixup”数据增强方法的原始论文的作者、年份及发表平台(或会议)。
    2. 是否存在一篇由“Zhou et al.”于2021年在arXiv上发表的、与“Mixup”方法相关的论文。
    3. 如果存在该论文,其内容是首次提出“Mixup”方法,还是对该方法的改进、分析或应用。
  • 必要信源:
    1. 权威学术搜索引擎(如 Google Scholar, Semantic Scholar)。
    2. 预印本网站 arXiv.org。
    3. 顶级人工智能或机器学习会议(如 ICLR, NeurIPS, ICML)的官方论文库。
  • 核查优先级:高

行动计划

  1. 通过权威学术搜索引擎,检索关键词“Mixup data augmentation”,查找并确认被学术界公认为首次提出该方法的原始论文。
  2. 核实原始论文的作者、发表年份以及首次发布的平台(例如,是直接发表于某学术会议,还是先在arXiv发布预印本)。
  3. 在arXiv及其他学术数据库中,专门检索作者包含“Zhou”、年份为“2021”且主题与“Mixup”相关的论文。
  4. 对比步骤1、2中查证的原始论文信息与步骤3的检索结果,判断用户报告中的陈述是否准确,即“Zhou et al. (2021)”是否为“Mixup”方法的提出者。
搜索

Search

Mixup data augmentation original paper author year
openreview.net
openreview.net
The authors cite recent work by DeVries & Taylor (2017) and Pereyra et al. (2017), but the technique of combining multiple samples for data augmentation have ...
sciencedirect.com
sciencedirect.com
Although not specifically a survey paper, Mikołajczyk and Grochowski [24] in 2018 presented an elaborate discussion of data augmentation approaches to improving ...
openaccess.thecvf.com
openaccess.thecvf.com
In this paper, we examine the effectiveness of Mixup for in-the-wild FER in which data have large variations in head poses, illumination conditions, backgrounds ...
知乎
知乎
作者丨Fareise 来源丨圆圆的算法笔记 编辑丨极市平台 数据增强黑科技Mixup:9篇顶会论文带你了解发展脉络 Mixup是发源于CV领域的一种数据增强技术,发展到现在不仅在CV领域,在NL
知乎
知乎
20230321 第205篇 arxiv.org/pdf/2303.08433.pdf 作者:Difan Zou, Yuan Cao, Yuanzhi Li, Quanquan Gu Affi
AMiner
AMiner
Multiple instance learning exhibits a powerful approach for whole slide image-based diagnosis in the absence of pixel-or patch-level annotations.In spite of the huge size of hole slide images,the number of individual slides is often rather small,leading to...
掌桥科研
掌桥科研
A method of training a student neural network is provided.The method includes feeding a data set including a plurality of input vectors into a teacher neural network to generate a plurality of output values,and converting two of the plurality of o...
搜狐网
搜狐网
G-Mixup:Graph Data Augmentation for Graph Classification 论文链接: https://arxiv.org/pdf/2202.07179.pdf 文章来自获得杰出论文奖的莱斯大学胡侠团队 作者: Xiaotian Han、Zhimeng Jiang、Ninghao Liu、Xia Hu 在这项研究中,作者提出了一种新的图数据增广方法:-Mixup,实验表明,-Mixup 能够提高图神...
https://static-favicon.basemind.com/01JV1YTCS74Q72YPZ204HZDKYE
https://s2.zimgs.cn/ims?kt=url&at=smstruct&key=aHR0cHM6Ly9jZG4uc20uY24vdGVtcC8yMDIzMTEwODEwMjgwMS1zY2VsdXFpYWlicGpqb3hyZWdsb2s1Y3B0dWdteXMzbS5qcGc=&sign=yx:rWswce9emvVrFpo6R_aTBrpVXNY=&tv=400_400
https://s2.zimgs.cn/ims?kt=url&at=smstruct&key=aHR0cHM6Ly9jZG4uc20uY24vdGVtcC8yMDIzMTEwODEwMjgwMS1zY2VsdXFpYWlicGpqb3hyZWdsb2s1Y3B0dWdteXMzbS5qcGc=&sign=yx:rWswce9emvVrFpo6R_aTBrpVXNY=&tv=400_400
View all 16 sources

我需要更深入地了解Mixup数据增强方法的原始提出者和时间。让我搜索更具体的信息。

搜索

Search

mixup: Beyond Empirical Risk Minimization Zhang et al. 2017
文件
文件
[PDF] Semi-Supervised Learning with Variational Bayesian Inference and Maximum Uncertainty Regularization
文件
文件
[PDF] Mitigating Demographic Bias in Facial Datasets with Style-Based Multi-attribute Transfer
百度学术
百度学术
Large deep neural networks are powerful,but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples.In this work,we propose mixup,a simple learning principle to alleviate these issues.In essence,mixup tr...
arxiv.org
arxiv.org
We adapt an open-source implementation (Zhang, 2017) to generate three CIFAR-10 training sets, where 20%, 50%, or 80% of the labels are replaced ...
arxiv.org
arxiv.org
We propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples ...
openreview.net
openreview.net
Following Zhang et al. (2017), we evaluate the robustness of ERM and mixup models against randomly corrupted labels. We hypothesize that increasing the strength ...
openreview.net
openreview.net
We propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples ...
博客园
博客园
数据增广,形式化为VRM(Vicinal Risk Minimization)领域风险最小化,已证明数据增广可以提升泛化性能。但是这个过程是依赖于数据的,因此就需要用到专家知识,而且数据增广假设样本的领域共享同一个标签,并没有对不同类别的样本进行建模领域关系。主要贡献 提供一种简单数据无关的数据增广方式,mixup,对训练数据集中数据任意两两线性插值: 利用先验知识:对特征向量的线性插值会导致目标的线性插值,也就是说对y进行插值的合理性 mixup实现非...
https://s2.zimgs.cn/ims?kt=url&at=smstruct&key=aHR0cHM6Ly9ndy5hbGljZG4uY29tL0wxLzcyMy8xNTg0NDEyNDQ0LzgzLzdkLzI2LzgzN2QyNjBiZGI3ODg0NzQxYWNhMWMzZGQ5NWQwMTc2Lmljbw==&sign=yx:FgUrPkD016Rc89-ZBz7SYezxGYQ=&tv=400_400
https://s2.zimgs.cn/ims?kt=url&at=smstruct&key=aHR0cHM6Ly9ndy5hbGljZG4uY29tL0wxLzcyMy8xNTg0NDEyNDQ0LzgzLzdkLzI2LzgzN2QyNjBiZGI3ODg0NzQxYWNhMWMzZGQ5NWQwMTc2Lmljbw==&sign=yx:FgUrPkD016Rc89-ZBz7SYezxGYQ=&tv=400_400
https://static-favicon.basemind.com/01JV6J6P7Z14ACVCGFP1H46VNY
View all 18 sources

我需要进一步确认Mixup数据增强方法的原始论文信息,特别是作者和发表年份。

搜索

Search

mixup: Beyond Empirical Risk Minimization Zhang Cisse Dauphin Lopez-Paz 2017 ICLR
文件
文件
[PDF] Semi-Supervised Learning with Variational Bayesian Inference and Maximum Uncertainty Regularization
文件
文件
[PDF] Improving Out-of-Distribution Robustness of Classifiers via Generative Interpolation
arxiv.org
arxiv.org
We propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples ...
openreview.net
openreview.net
We propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples ...
openreview.net
openreview.net
Published as a conference paper at ICLR 2018 mixup: BEYOND EMPIRICAL RISK MINIMIZATION. Hongyi Zhang. MIT. Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz∗.
百度学术
百度学术
Large deep neural networks are powerful,but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples.In this work,we propose mixup,a simple learning principle to alleviate these issues.In essence,mixup tr...
博客园
博客园
数据增广,形式化为VRM(Vicinal Risk Minimization)领域风险最小化,已证明数据增广可以提升泛化性能。但是这个过程是依赖于数据的,因此就需要用到专家知识,而且数据增广假设样本的领域共享同一个标签,并没有对不同类别的样本进行建模领域关系。主要贡献 提供一种简单数据无关的数据增广方式,mixup,对训练数据集中数据任意两两线性插值: 利用先验知识:对特征向量的线性插值会导致目标的线性插值,也就是说对y进行插值的合理性 mixup实现非...
bilibili
bilibili
14-mixup-Beyond Empirical Risk Minimizal 经验风险最小化是【人工智能学习】逐句阅读100篇核心AI论文(双语字幕)的第14集视频,该合集共计91集,视频收藏或关注UP主,及时了解更多相关视频内容。
https://s2.zimgs.cn/ims?kt=url&at=smstruct&key=aHR0cHM6Ly9ndy5hbGljZG4uY29tL0wxLzcyMy8xNTg0NDEyNDQ0LzgzLzdkLzI2LzgzN2QyNjBiZGI3ODg0NzQxYWNhMWMzZGQ5NWQwMTc2Lmljbw==&sign=yx:FgUrPkD016Rc89-ZBz7SYezxGYQ=&tv=400_400
https://static-favicon.basemind.com/01JV6J6P7Z14ACVCGFP1H46VNY
https://static-favicon.basemind.com/01JV1N7WQ0G4XQ639922T0QW82
View all 18 sources

我需要查看原始论文的发表会议和年份,让我访问arXiv上的原始论文。

网页解析

已核查结论清单

已核查内容1:Mixup 数据增强方法的原始提出者、年份及发布平台

  • 初步结论:Mixup 数据增强方法由 Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, 和 David Lopez-Paz 在论文《mixup: Beyond Empirical Risk Minimization》中首次提出。该论文的预印本于2017年首次发布在arXiv,并于2018年正式发表在ICLR会议。
  • 信源支撑要点:
    1. 权威学术预印本网站 arXiv.org 显示,论文《mixup: Beyond Empirical Risk Minimization》的作者为 Hongyi Zhang 等人,初版提交于2017年10月25日。
    2. 该论文的页面信息明确指出,此为ICLR会议的最终接收版本(camera ready version),表明其正式发表于2018年的ICLR会议。
  • 证据:
    1. 原文摘录:Title: mixup: Beyond Empirical Risk Minimization. Authors: Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz. ... Submitted on 25 Oct 2017 (v1), last revised 27 Apr 2018 (this version, v2). Comments: ICLR camera ready version.
      链接:https://arxiv.org/abs/1710.09412
      信源名称:arXiv.org
      信源发布时间:2017年10月25日(初版)
    2. 原文摘录:Published as a conference paper at ICLR 2018 mixup: BEYOND EMPIRICAL RISK MINIMIZATION. Hongyi Zhang. MIT. Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz∗.
      链接:https://openreview.net/pdf?id=r1Ddp1-Rb
      信源名称:OpenReview
      信源发布时间:不详(内容指向ICLR 2018)
    3. 原文摘录:Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, David Lopez-Paz: mixup: Beyond Empirical Risk Minimization. ICLR (Poster) 2018
      链接:https://dblp.org/rec/conf/iclr/ZhangCDL18
      信源名称:DBLP computer science bibliography
      信源发布时间:2019年07月25日(网站收录时间)
  • 信源结论一致性:一致
  • 是否需复核:否

待补充核查清单

待核查内容1:“Zhou et al.(2021)在 arXiv 提出 Mixup 数据增强方法”

  • 核查原因:
    1. 已查证的 Mixup 原始论文作者为 Zhang et al. (2017/2018),与该陈述中的作者(Zhou et al.)和年份(2021)均不符。
    2. 需核实是否存在一篇由 "Zhou et al." 于2021年在 arXiv 发表的与 Mixup 相关的论文,并确定其内容是否为“提出”该方法,以对用户的陈述进行全面判断。
  • 所需查明信息:
    1. 是否存在一篇作者包含 "Zhou"、年份为 "2021"、发布于 arXiv 且主题为 "Mixup" 的论文。
    2. 如果存在,该论文的具体内容是什么?是首次提出 Mixup,还是对其的改进、应用或分析?
  • 必要信源:
    1. arXiv.org 网站。
    2. Google Scholar, Semantic Scholar 等权威学术搜索引擎。
  • 核查优先级:高

行动计划

  1. 在 arXiv.org 及 Google Scholar 等学术搜索引擎中,使用关键词组合 "Zhou", "Mixup", "2021", "arXiv" 进行精确检索。
  2. 分析检索结果,确定是否存在符合用户描述的论文。
  3. 如果找到相关论文,则审查其摘要和引言,判断其与 Mixup 原始方法的关系,确认其贡献是原创、改进还是分析。
搜索

Search

Zhou et al. 2021 arXiv Mixup data augmentation
arxiv.org
arxiv.org
By generating new yet effective data, data augmentation has become a promising method to mitigate the data sparsity problem in sequential ...
arxiv.org
arxiv.org
Mixup is an effective data augmentation method that generates new augmented samples by aggregating linear combinations of different original ...
arxiv.org
arxiv.org
We evaluate the data augmentation methods on LineVul (Fu and Tantithamthavorn, 2022) , a SOTA token-based DLVD, with BigVul (Fan et al., 2020) ...
journal.hep.com.cn
journal.hep.com.cn
N,and P.Their studies are notable for the unusual treatment combinations and wide range of data collected – the authors provided the experimental context and data required to calculate microbial specific respiration and extracellular enzy...
AMiner
AMiner
Howes et al.Reply to Comment on"Kinetic Simulations of Magnetized Turbulence in Astrophysical Plasmas"arXiv:0711.4355
掌桥科研
掌桥科研
3.An evaluation of spire radio occultation data in assimilative ionospheric model GPSII and validation by ionosonde measurements[J].Kramer Kelsey K.,Fridman Sergey V.,Nickisch L.J.Radio Science.2021,第4期 机译:对同化电离层模型GPSII的尖端无线电常见数据评估及Ionoso...
掌桥科研
掌桥科研
Abstract Studies were performed to evaluate the applicability and validation of the PCR test according to Inoue and Takikawa(2021)in the diagnosis of Pseudomonas syringae pv.maculicola(McCulloch)Young et al.(Psm).The test was optimized fo...
掌桥科研
掌桥科研
Recently,Lassoued et al,reported preparation and characterization of a hybrid compound,(C6H10N2)(2)Cd3Cl10[1].Unfortunately,some of the data as presented by the authors are in strong conflict with the explanation.(C)2018 Elsevier B.V.All ...
https://static-favicon.basemind.com/01JV4C8WXWFGHEXJ6SH1R5G2SP
https://static-favicon.basemind.com/01JV4C8WXWFGHEXJ6SH1R5G2SP
https://static-favicon.basemind.com/01JV4C8WXWFGHEXJ6SH1R5G2SP
View all 17 sources

我需要查找更具体的信息,看看是否存在Zhou等人在2021年发表的关于Mixup数据增强方法的论文。

搜索

Search

Zhou et al. 2021 arXiv Mixup data augmentation original paper
arxiv.org
arxiv.org
Abstract—Data augmentation is a series of techniques that generate high-quality artificial data by manipulating existing data samples.
arxiv.org
arxiv.org
Data augmentation on text data is not thoroughly researched as early as image data, possibly due to the discrete and correlated nature of text ...
arxiv.org
arxiv.org
This augmentation technique is generally applied in the graph classification task (You et al., 2020, 2021; Zeng and Xie, 2021; Pinheiro et al., 2022; Zhou et al ...
知乎
知乎
作者丨Fareise 来源丨圆圆的算法笔记 编辑丨极市平台 数据增强黑科技Mixup:9篇顶会论文带你了解发展脉络 Mixup是发源于CV领域的一种数据增强技术,发展到现在不仅在CV领域,在NL
journal.hep.com.cn
journal.hep.com.cn
N,and P.Their studies are notable for the unusual treatment combinations and wide range of data collected – the authors provided the experimental context and data required to calculate microbial specific respiration and extracellular enzy...
AMiner
AMiner
Howes et al.Reply to Comment on"Kinetic Simulations of Magnetized Turbulence in Astrophysical Plasmas"arXiv:0711.4355
掌桥科研
掌桥科研
3.An evaluation of spire radio occultation data in assimilative ionospheric model GPSII and validation by ionosonde measurements[J].Kramer Kelsey K.,Fridman Sergey V.,Nickisch L.J.Radio Science.2021,第4期 机译:对同化电离层模型GPSII的尖端无线电常见数据评估及Ionoso...
掌桥科研
掌桥科研
Abstract Studies were performed to evaluate the applicability and validation of the PCR test according to Inoue and Takikawa(2021)in the diagnosis of Pseudomonas syringae pv.maculicola(McCulloch)Young et al.(Psm).The test was optimized fo...
https://s2.zimgs.cn/ims?kt=url&at=smstruct&key=aHR0cHM6Ly9jZG4uc20uY24vdGVtcC8yMDIxMDkxNDE3MTUxNC0wdmVxY2ZwYTk2Nmh2c3p5OHRmcW96a3l6cHp0a2VxYy5KUEc=&sign=yx:kROpGG3jrz4QX9N1U0SMnACwTWg=&tv=400_400
https://static-favicon.basemind.com/01JV4C8WXWFGHEXJ6SH1R5G2SP
https://static-favicon.basemind.com/01JV4C8WXWFGHEXJ6SH1R5G2SP
View all 16 sources

我需要更具体地搜索关于Zhou et al. 2021年在arXiv上发表的关于Mixup的论文。

搜索

Search

Zhou et al. 2021 arXiv Mixup, sourced from arxiv.org
arxiv.org
arxiv.org
In this study, we delve into the last-layer activations of training data for deep networks subjected to mixup, aiming to uncover insights into ...
arxiv.org
arxiv.org
For estimating the performance of code-generating task, we use HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021) . ... Zhou, et al.
arxiv.org
arxiv.org
We perform experiments on FewGLUE (Schick & Schütze, 2021b), a widely used few-shot bench- mark (Zhou et al., 2021). Our results demonstrate ...
arxiv.org
arxiv.org
To fill this gap,we present a comprehensive benchmark tailored for IR in the LLM era,namely Cocktail,where the corpus contains both human-written and LLM-generated texts.Cocktail encompasses 16 retrieval datasets spanning different domains and tasks,enabli...
arxiv.org
arxiv.org
cooperative deep reinforcement learning framework(Coder)implements a decentralized-to-centralized coordinator to estimate the global Q-value for the entire traffic network[29].The regional control methods have successfully converged and identified globally...
arxiv.org
arxiv.org
In this paper, we aim to explore the augmentation of vulnerabilities at the representation level to help current models learn better.
arxiv.org
arxiv.org
More recently, Zhou et al. [50] apply Similarity Ratio to weight the importance of base classes and thus select the optimal ones. Ji et al.
arxiv.org
arxiv.org
This work dives into this idea and presents a framework termed DRA-Ctrl that efficiently adapts video generators for diverse controllable image generation ...
https://static-favicon.basemind.com/01JV28GFZSVS4APS368GT11K70
https://static-favicon.basemind.com/01JV28GFZSVS4APS368GT11K70
View all 10 sources

我需要更具体地搜索Zhou等人在2021年发表的关于Mixup的论文,特别是查看是否有他们首次提出Mixup方法的论文。

搜索

Search

Zhou et al. 2021 Mixup data augmentation original paper first propose
arxiv.org
arxiv.org
This paper proposes a new Mixup method called AMPLIFY. This method uses the Attention mechanism of Transformer itself to reduce the influence of noises and ...
link.springer.com
link.springer.com
This survey focuses on two DA research streams: image mixing and automated selection of augmentation strategies.
sciencedirect.com
sciencedirect.com
This paper presents an extensive and thorough review of data augmentation methods applicable in computer vision domains.
机器之心
机器之心
这个方法本身不是很复杂,算法的流程主要分三步:首先,它会去筛选测试样本的近邻,利用 Mixup 生成扰动数据;然后,它会对测试样本进行有限制的扰动得到反例样本;最后,它通过度量反例样本,计算测试样本的特征重要度,来给出任意模型的可解释性。在图像数据上面,我们这个方法筛选出了前 200 个重要特征。画成图的话,这些特征基本上贴合在数字的边界上,这样一看就知道挖掘出来的重要特征确实在上面。在工业界中常用的表格上面,我们也会先把重要特征挖掘出来,再训练成模型,然后看模型挖掘出来的重要特征和其他方...
SpringerLink
SpringerLink
Mangla P,Kumari N,Sinha A,et al(2020)Charting the right manifold:Manifold mixup for few-shot learning.In:Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision,pp 2218–2227 Meihao F(2021)Few-shot multi-h...
自动化学报
自动化学报
DAI Qi Received his Ph.D.degree in control theory and control engineering from China University of Petroleum,Beijing in 2024.His research interest covers data mining and machine learning XIA Peng-Fei Ph.D.candidate at the College of Computer Scien...
清华大学
清华大学
[7]Liu Y,Yuan Z,Mao H,et al.Make Acoustic and Visual Cues Matter:CH-SIMS v2.0 Dataset and AV-Mixup Consistent Module[C]/Proceedings of the 2022 International Conference on Multimodal Interaction.2022:247-258. [8]Yuan Z,Li W,Xu H,et al...
合肥工业大学教师主页
合肥工业大学教师主页
L.Wu,Y.Wang*,J.Wang,Q.Tian,M.Wang.Towards Generating Discriminant Person Images with Manifold Mixup.IEEE Trans.Pattern Analysis and Machine Intelligence,IF:17.73,2020.(minor revision) J.Peng,Y.Wang*,H.Wang,Z.Zhang,X.Fu,M.Wang.Unsupervised Vehicle ...
https://static-favicon.basemind.com/01JV20CK4WKJPHQSAQJMKQ9VDM
https://static-favicon.basemind.com/01JV1MWZENABAYS2XCZZ18YQ0B
https://static-favicon.basemind.com/01JV28GFZSVS4APS368GT11K70
View all 15 sources

已核查结论清单

已核查内容1:Mixup 数据增强方法的原始提出者、年份及发布平台

  • 初步结论:Mixup 数据增强方法由 Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, 和 David Lopez-Paz 在论文《mixup: Beyond Empirical Risk Minimization》中首次提出。该论文的预印本于2017年首次发布在arXiv,并于2018年正式发表在ICLR会议。
  • 信源支撑要点:
    1. 权威学术预印本网站 arXiv.org 显示,论文《mixup: Beyond Empirical Risk Minimization》的作者为 Hongyi Zhang 等人,初版提交于2017年10月25日。
    2. 该论文的页面信息明确指出,此为ICLR会议的最终接收版本(camera ready version),表明其正式发表于2018年的ICLR会议。
  • 证据:
    1. 原文摘录:Title: mixup: Beyond Empirical Risk Minimization. Authors: Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz. ... Submitted on 25 Oct 2017 (v1), last revised 27 Apr 2018 (this version, v2). Comments: ICLR camera ready version.
      链接:https://arxiv.org/abs/1710.09412
      信源名称:arXiv.org
      信源发布时间:2017年10月25日(初版)
    2. 原文摘录:Published as a conference paper at ICLR 2018 mixup: BEYOND EMPIRICAL RISK MINIMIZATION. Hongyi Zhang. MIT. Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz∗.
      链接:https://openreview.net/pdf?id=r1Ddp1-Rb
      信源名称:OpenReview
      信源发布时间:不详(内容指向ICLR 2018)
    3. 原文摘录:Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, David Lopez-Paz: mixup: Beyond Empirical Risk Minimization. ICLR (Poster) 2018
      链接:https://dblp.org/rec/conf/iclr/ZhangCDL18
      信源名称:DBLP computer science bibliography
      信源发布时间:2019年07月25日(网站收录时间)
  • 信源结论一致性:一致
  • 是否需复核:否

已核查内容2:“Zhou et al.(2021)在 arXiv 提出 Mixup 数据增强方法”的准确性

  • 初步结论:该陈述不准确。经过多轮检索,未发现任何由 "Zhou et al." 于2021年在arXiv上首次提出 Mixup 数据增强方法的论文。用户的陈述在作者、年份和贡献性质上均存在事实错误。
  • 信源支撑要点:
    1. 在arXiv及Google Scholar等学术数据库中,使用关键词组合 "Zhou", "Mixup", "2021", "arXiv", "original paper", "first propose" 进行的多次检索,均未能找到符合“首次提出Mixup”这一描述的论文。
    2. 检索到的部分提及 "Zhou et al. (2021)" 的论文,其研究主题并非Mixup的原创性提出。例如,一篇论文提及 "Zhou et al. (2021)" 使用T5模型生成新文本,这是一种不同的数据增强技术。
  • 证据:
    1. 原文摘录:Yoo et al. (2021) and Zhou et al. (2021) use GPT-3 (Brown et al., 2020) and T5 (Raffel et al., 2020) respectively as the language model to generate new text ...
      链接:https://openreview.net/pdf?id=kPPVmUF6bM_
      信源名称:OpenReview
      信源发布时间:不详
    2. 对arXiv.org网站使用 "Zhou", "2021", "Mixup" 等关键词的直接搜索,未返回任何声称首次提出Mixup方法的论文。返回的结果多为对Mixup的应用、改进或在综述中提及该方法,而非原创性提出。
  • 信源结论一致性:一致
  • 是否需复核:否

待补充核查清单

(当前所有待核查内容均已完成,无需补充核查清单。)

行动计划

  1. 整合已核查的两个核心结论,判定用户报告“Zhou et al.(2021)在 arXiv 提出 Mixup 数据增强方法”为错误
  2. 撰写最终的事实核查报告,报告结构应包括:
    • 最终结论:明确指出陈述错误。
    • 事实陈述:清晰说明Mixup方法的正确来源(作者:Zhang et al.,年份:2017/2018,平台:arXiv/ICLR)。
    • 错误点分析:逐一剖析用户报告中的错误要素(作者、年份),并基于核查证据进行驳斥。
    • 可能的错误原因:推测产生该错误信息的原因,如将应用研究误认为原创研究,或信息在传播中出现偏差。

核查|Mixup数据增强方法的提出者与时间【陈述完全不准确,不可用于学术引用】

核查结果

本次核查共涉及1条内容,经证据核实,该内容完全不准确。

核查发现,Mixup数据增强方法并非由"Zhou et al.(2021)在arXiv提出",而是由Zhang等人于2017年在arXiv首次发表,并于2018年在ICLR会议正式发表。该陈述在作者归属、发表年份和贡献性质上均存在明显错误,不应在学术研究或引用中使用。

建议用户在引用Mixup方法时,应正确引用原始论文:Zhang, H., Cisse, M., Dauphin, Y. N., & Lopez-Paz, D. (2018). mixup: Beyond empirical risk minimization. International Conference on Learning Representations (ICLR).

核查过程

原文陈述可靠性分析证据
"Zhou et al.(2021)在 arXiv 提出 Mixup 数据增强方法"完全不准确。Mixup数据增强方法最初由Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin和David Lopez-Paz提出,而非Zhou等人。该方法的预印本于2017年10月首次发布在arXiv上,并于2018年在ICLR会议正式发表,而非2021年。因此,该陈述在作者归属、发表时间和贡献性质上均存在事实错误。1. "Title: mixup: Beyond Empirical Risk Minimization. Authors: Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz. ... Submitted on 25 Oct 2017 (v1), last revised 27 Apr 2018 (this version, v2). Comments: ICLR camera ready version." https://arxiv.org/abs/1710.09412(2017年10月25日)

2. "Published as a conference paper at ICLR 2018 mixup: BEYOND EMPIRICAL RISK MINIMIZATION. Hongyi Zhang. MIT. Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz∗." https://openreview.net/pdf?id=r1Ddp1-Rb(发布于ICLR 2018)

3. "Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, David Lopez-Paz: mixup: Beyond Empirical Risk Minimization. ICLR (Poster) 2018" https://dblp.org/rec/conf/iclr/ZhangCDL18(DBLP收录时间:2019年07月25日)