
2. Study Notes on SCI Paper Introduction Writing: A Case Analysis

Date: 2019-08-08 08:36:07


Case paper: "A GPU-tailored approach for training kernelized SVMs"

Course video

After raising a research gap, immediately follow it with a literature review or with your own solution. The research gap can start as a broad concept and then gradually narrow down to the specific gap of our topic: how did we address it, and what are the purpose and significance?

I. Paper Example

1. Support Vector Machines (SVMs) are among the most popular general purpose learning methods in use today.

Function: states the importance of the research. Typical wording: the most popular.

2. SVM learning amounts to learning a linear predictor, with regularization (corresponding to a "large margin") ensuring good generalization even in very high dimensions.

Function: defines the research object and introduces its advantages.
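For reference, the "linear predictor with regularization" described here is, in its standard soft-margin form, the following optimization problem (a textbook formulation added for these notes, not quoted from the case paper):

\min_{w,b}\ \frac{\lambda}{2}\lVert w \rVert^2 + \frac{1}{n}\sum_{i=1}^{n}\max\bigl(0,\ 1 - y_i(\langle w, x_i\rangle + b)\bigr)

The regularization term \lVert w \rVert^2 is what corresponds to the "large margin" mentioned in the sentence.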

3. The predictor need not be linear in the input representation; it is possible to learn a linear predictor in some extremely high dimensional space specified implicitly through a kernel function.

Function: further describes the advantages of SVMs.
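As a concrete illustration (standard textbook material, not from the case paper), the Gaussian kernel used later in the paper implicitly specifies such a space:

K(x, y) = \exp\bigl(-\gamma \lVert x - y \rVert^2\bigr)

The learned predictor then takes the form h(x) = \sum_i \alpha_i y_i K(x_i, x) + b, which is linear in the implicit feature space even though it is non-linear in the input x.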

4. SVMs were originally suggested in the context of binary classification, but more recently variants following the same principles have also been developed and successfully applied to more complex prediction tasks such as multiclass classification and prediction of structured outputs such as sequences.

Function: same as the previous three sentences; describes SVM variants to show how widely SVMs are applied.

5-6. Training an SVM amounts to solving a quadratic programming problem (see Section 2). Although general-purpose quadratic programming solvers can only handle fairly small SVM instances, much effort has been made in the past two decades to design special purpose solvers that can handle large-scale SVM instances.

The research problem, and how the existing literature addresses it.

Function: describes the current research gap.
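For context, the quadratic program referred to here is the standard SVM dual (textbook form, assumed rather than quoted from the paper):

\max_{\alpha}\ \sum_{i=1}^{n}\alpha_i - \frac{1}{2}\sum_{i,j}\alpha_i \alpha_j y_i y_j K(x_i, x_j)
\quad \text{s.t.}\quad 0 \le \alpha_i \le C,\ \ \sum_{i=1}^{n}\alpha_i y_i = 0

The n \times n kernel matrix K(x_i, x_j) is what makes general-purpose solvers impractical for large n.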

7. This effort resulted in widely-used packages that can solve both "linear" SVMs (i.e. where the prediction is linear in the input representation) and "kernelized" SVMs (where a non-linear kernel defines the linear prediction space).

Function: how existing work has addressed the research gap, split into two categories of solutions: linear and kernelized SVMs.

8-9. For linear SVMs, stochastic methods such as PEGASOS [13] and Stochastic Dual Coordinate Ascent [8] have recently been established as being effective at solving extremely large SVM instances, typically in less time than that which is required to read the data into memory. For kernel SVMs, most leading solvers are based on decomposing the dual optimization problem into small subproblems.

Function: literature review. Technique: describe by category (continuing with how each of the two classes is solved).
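To make the "stochastic methods" concrete, here is a minimal Python sketch of the PEGASOS update for a linear SVM, based on the published algorithm; the function name and interface are illustrative, not taken from the case paper:

import numpy as np

def pegasos_train(X, y, lam=0.01, n_iters=10000, seed=0):
    """Minimal PEGASOS sketch: stochastic subgradient descent on the
    regularized hinge loss. X is an (n, d) array; y holds labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)        # sample one training example
        eta = 1.0 / (lam * t)      # decaying step size
        if y[i] * (X[i] @ w) < 1:  # margin violated: hinge subgradient step
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                      # margin satisfied: only shrink w
            w = (1 - eta * lam) * w
    return w

Each iteration costs one dot product on a single example, which is why such methods can finish in less time than a second pass over the data.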

10-11. Such approaches can indeed handle fairly large problems, provided that the data fits in memory, but it is not uncommon for training to require many hours or days, even using state-of-the-art optimizers. There is therefore still a strong need for faster training of kernel SVMs.

Function: describes the research motivation and the research gap.

12-13. One attractive possibility for enabling faster SVM training is to leverage the power of Graphical Processing Units (GPUs). GPUs are highly parallel, structured, computational engines and are now available relatively inexpensively and are found in many modern computers. In this paper we discuss how SVM training can be efficiently implemented on a GPU, and present such an implementation for both binary and multiclass SVMs.

After introducing a new method, it needs to be defined.

Function: describes how the research gap is addressed: GPU acceleration.

14-15. Several authors have recently proposed using GPUs for kernelized SVM training [3,2] and related problems [6]. These previous approaches, however, primarily focused on pointing out the advantages of implementing standard algorithms on graphics hardware, typically using GPU matrix-multiplication libraries, and not on how these algorithms can be modified to better take advantage of the GPU architecture.

Function: introduces a literature review while bringing in another research gap.

16-17. We study various algorithmic choices for SVM training in the context of GPUs, discuss how the optimal choices and algorithms on a GPU are different than those for a serial implementation, and arrive at an implementation specifically designed for graphics hardware. As with many previous approaches, we assume that the dataset fits in memory, and focus mostly on the Gaussian kernel, although our implementation can handle any kernel function which can be written in the form K(x, y) = f(‖x‖, ‖y‖, ⟨x, y⟩) (see Section 5.2), and our ideas apply even more broadly to any kernel which is an aggregation of element-wise operations.

Function: describes how the authors' own work addresses the research gap: similar to previous methods, but with an improvement.
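As a quick check (a worked example added for these notes, not from the paper), the Gaussian kernel indeed fits the form K(x, y) = f(‖x‖, ‖y‖, ⟨x, y⟩), since

\exp\bigl(-\gamma\lVert x - y\rVert^2\bigr) = \exp\bigl(-\gamma(\lVert x\rVert^2 + \lVert y\rVert^2 - 2\langle x, y\rangle)\bigr)

so kernel evaluations only need precomputed norms plus inner products, and the inner products can be batched as matrix multiplications, which is exactly what GPUs do well.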

18-20. One particularly significant drawback of other GPU SVM solvers is their lack of support for sparse datasets. On the CPU, taking advantage of sparsity is a simple matter, and sparse datasets are encountered frequently enough that many widely-used SVM solvers treat all input vectors as sparse, by default [9,4,1]. On the GPU, however, maximum performance is only achieved if memory accesses follow certain fairly restrictive patterns, which are difficult to ensure with sparse data.

Function: introduces a new research gap.
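For readers unfamiliar with the sparsity point, a small Python illustration (standard scipy usage, my own example, not from the paper) of why exploiting sparsity is "a simple matter" on the CPU:

import numpy as np
from scipy.sparse import csr_matrix

# A mostly-zero data matrix stored in compressed sparse row (CSR) form:
# only the non-zero values and their column indices are kept.
X = csr_matrix(np.array([[0.0, 3.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0, 2.0]]))
w = np.array([0.5, 1.0, -1.0, 0.25])

# Dot products touch only the stored non-zeros, skipping the zeros entirely.
print(X @ w)  # [3. 1.]

The irregular, index-chasing memory accesses that make this cheap on a CPU are precisely what prevents coalesced reads on a GPU, which is the research gap the paper raises here.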

21. In contrast to other GPU SVM solvers, our implementation does take advantage of sparsity in the training set through a novel "sparsity clustering" approach (Section 5.3).

Function: how the authors' work addresses the research gap.

22. Overall, our implementation is orders of magnitude faster than existing CPU implementations, and several times faster on sparse datasets than prior GPU implementations of SVM training.

Function: describes the purpose and significance.

II. Summary

1. State the importance of the research

Common phrases: widespread; much research in recent years

2. Literature review

Note: do not just say vaguely "they did/showed/found"; be specific: calculated, monitored, identified.

3. Research gap/problem/motivation (there can be several, narrowing in scope step by step; each needs a literature review of how others have addressed it)

Common phrases: inefficient; unclear; few studies have focused on

4. This paper's research content/significance/purpose

Common phrases: we propose; our approach; successful
