The Latest Batch of Must-Read High-IF ScienceDirect Papers in Computer Science
Updated: 2020-12-02

01





Journal: Inform. Syst.

Impact Factor (IF): 2.466

Published: 2020-10-30


Abstract


Cloud storage allows organizations to store data at remote sites of service providers. Although cloud storage services offer numerous benefits, they also involve new risks and challenges with respect to data security and privacy aspects. To preserve confidentiality, data must be encrypted before outsourcing to the cloud. Although this approach protects the security and privacy aspects of data, it also impedes regular functionality such as executing queries and performing analytical computations. To address this concern, specific data encryption schemes (e.g., deterministic, random, homomorphic, order-preserving, etc.) can be adopted that still support the execution of different types of queries (e.g., equality search, full-text search, etc.) over encrypted data.
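To make the trade-off concrete, here is a minimal, self-contained Python sketch (standard library only, not CryptDICE's implementation) of why a deterministic scheme supports server-side equality search over outsourced data while a randomized scheme does not; a keyed HMAC stands in for deterministic encryption purely for illustration.

```python
# Toy illustration (not CryptDICE): deterministic vs. randomized protection.
# A keyed HMAC stands in for a deterministic encryption scheme here; real
# systems would use e.g. AES-SIV or a dedicated searchable-encryption scheme.
import hmac, hashlib, os

KEY = os.urandom(32)  # secret key kept on-premise, never sent to the cloud

def deterministic_token(plaintext: str) -> bytes:
    """Same plaintext + same key -> same token, so the server can match equality."""
    return hmac.new(KEY, plaintext.encode(), hashlib.sha256).digest()

def randomized_ciphertext(plaintext: str) -> bytes:
    """A fresh random nonce makes every output distinct (stronger privacy),
    but the server can no longer match two encryptions of the same value."""
    nonce = os.urandom(16)
    return nonce + hmac.new(KEY, nonce + plaintext.encode(), hashlib.sha256).digest()

# The "cloud" stores only tokens/ciphertexts, never plaintext.
stored = {deterministic_token("alice@example.com"): "row-17"}

# Equality search works on deterministic tokens ...
assert deterministic_token("alice@example.com") in stored
# ... but not on randomized ciphertexts: two encryptions never collide.
assert randomized_ciphertext("alice@example.com") != randomized_ciphertext("alice@example.com")
```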


However, these specialized data encryption schemes have to be implemented and integrated into the application, and their adoption introduces an extra layer of complexity in the application code. Moreover, as these schemes imply trade-offs between performance, security, storage efficiency, etc., making the appropriate trade-off is a challenging and non-trivial task. In addition, to support aggregate queries, User Defined Functions (UDF) have to be implemented directly in the database engine, and these implementations are specific to each underlying data storage technology, which demands expert knowledge and in turn increases management complexity.


In this paper, we introduce CryptDICE, a distributed data protection system that (i) provides built-in support for a number of different data encryption schemes, made accessible via annotations that represent application-specific (search) requirements; (ii) supports making appropriate trade-offs and executing these encryption decisions at diverse levels of data granularity; and (iii) integrates a lightweight service that performs dynamic deployment of User Defined Functions (UDF), without performing any alteration directly in the database engine, for heterogeneous NoSQL databases in order to realize low-latency aggregate queries and to avoid expensive data shuffling (from the cloud to an on-premise data center). We have validated CryptDICE in the context of a realistic industrial SaaS application and carried out an extensive functional validation, which shows the applicability of the middleware platform. In addition, our experimental evaluation confirms that the performance overhead of CryptDICE is acceptable and validates the performance optimizations for achieving low-latency aggregate queries.
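The abstract does not give CryptDICE's actual annotation syntax, so the following is a purely hypothetical Python sketch of the general idea: declaring per-field (search) requirements as annotations and deriving an encryption scheme from them. The requirement-to-scheme mapping and the `PatientRecord` entity are illustrative assumptions, not the paper's API.

```python
# Hypothetical sketch only -- not CryptDICE's actual annotation mechanism.
# It illustrates annotation-driven selection of an encryption scheme per field.
from dataclasses import dataclass, field, fields

REQUIREMENT_TO_SCHEME = {              # assumed mapping, for illustration
    "equality_search": "deterministic",
    "range_query": "order-preserving",
    "aggregation": "homomorphic",
    None: "randomized",                # strongest default when no queries are needed
}

def encrypted(requirement=None):
    return field(metadata={"scheme": REQUIREMENT_TO_SCHEME[requirement]})

@dataclass
class PatientRecord:                   # hypothetical SaaS entity
    email: str = encrypted("equality_search")
    age: int = encrypted("range_query")
    invoice_total: float = encrypted("aggregation")
    notes: str = encrypted()           # no queries needed -> randomized encryption

if __name__ == "__main__":
    for f in fields(PatientRecord):
        print(f.name, "->", f.metadata["scheme"])
```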


Link:

https://www.sciencedirect.com/science/article/abs/pii/S0306437920301289



02





Journal: J. Cloud Comp.

Impact Factor (IF): 2.788

Published: 2020-11-26


Abstract


Currently, rapidly developing digital technological innovations affect and change the integrated information management processes of all sectors. The high efficiency of these innovations has inevitably pushed the health sector into a digital transformation process to optimize the technologies and methodologies used in healthcare management systems. In this transformation, Internet of Things (IoT) technology plays an important role, as it enables many devices to connect and work together. IoT allows systems to work together using sensors, connection methods, internet protocols, databases, cloud computing, and analytics as infrastructure. In this respect, it is necessary to establish the required technical infrastructure and a suitable environment for the development of smart hospitals. This study points out the optimization factors, challenges, available technologies, and opportunities, as well as the system architecture, that arise when employing IoT technology in smart hospital environments. To that end, the required technical infrastructure is divided into five layers, and the system infrastructure, constraints, and methods needed in each layer are specified, including the smart hospital's dimensions and the extent of intelligent computing and real-time big data analytics. As a result of the study, the deficiencies that may arise in each layer of the smart hospital design model, and the factors that should be taken into account to eliminate them, are explained. The study is expected to provide a road map for managers, system developers, and researchers interested in optimizing the design of smart hospital systems.
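The abstract divides the infrastructure into five layers but does not name them, so the layer names in this minimal Python sketch are assumptions based on the technologies it lists (sensors, connection methods and protocols, databases, cloud computing, analytics); it only shows how a sensor reading could flow through such a layered pipeline.

```python
# Conceptual sketch only: a five-layer smart-hospital data flow.
# Layer names are assumptions; the paper's own layer breakdown may differ.
import random, statistics

def perception_layer():
    """Layer 1: sensors produce raw readings (here: a simulated heart rate)."""
    return {"patient_id": "P-001", "heart_rate": random.randint(55, 130)}

def network_layer(reading):
    """Layer 2: transport over some protocol (MQTT/CoAP in practice); a no-op here."""
    return dict(reading)                      # pretend it was serialized and delivered

def storage_layer(db, reading):
    """Layer 3: persist readings (a NoSQL/time-series database in practice)."""
    db.setdefault(reading["patient_id"], []).append(reading["heart_rate"])

def analytics_layer(db, patient_id):
    """Layer 4: cloud-side analytics over accumulated data."""
    rates = db[patient_id]
    return {"mean_hr": statistics.mean(rates), "max_hr": max(rates)}

def application_layer(summary):
    """Layer 5: alerts and dashboards for clinical staff."""
    if summary["max_hr"] > 120:
        print("ALERT: tachycardia episode detected", summary)

db = {}
for _ in range(20):
    storage_layer(db, network_layer(perception_layer()))
application_layer(analytics_layer(db, "P-001"))
```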


Link:

https://journalofcloudcomputing.springeropen.com/articles/10.1186/s13677-020-00215-5




03





Journal: Theor. Comput. Sci.

Impact Factor (IF): 0.747

Published: 2020-10-15


Abstract


Recently, studying fundamental graph problems in the Massively Parallel Computation (MPC) framework, inspired by the MapReduce paradigm, has gained a lot of attention. An assumption common to a vast majority of approaches is to allow Õ(n) memory per machine, where n is the number of nodes in the graph and Õ(·) hides polylogarithmic factors. However, as pointed out by Karloff et al. [SODA'10] and Czumaj et al. [STOC'18], it might be unrealistic for a single machine to have linear or only slightly sublinear memory.


In this paper, we thus study a more practical variant of the MPC model which only requires substantially sublinear or even subpolynomial memory per machine. In contrast to the linear-memory MPC model and also to streaming algorithms, in this low-memory MPC setting, a single machine will only see a small number of nodes in the graph. We introduce a new and strikingly simple technique to cope with this imposed locality.


In particular, we show that the Maximal Independent Set (MIS) problem can be solved efficiently, that is, in  rounds, when the input graph is a tree. This constitutes an almost exponential speed-up over the low-memory MPC algorithm in  rounds in a concurrent work by Ghaffari and Uitto [SODA'19] and substantially reduces the local memory from the Õ(n) required by the recent -round MPC algorithm of Ghaffari et al. [PODC'18] to n^ε for any ε > 0, without incurring a significant loss in the round complexity. Moreover, it demonstrates how to make use of the all-to-all communication in the MPC model to almost exponentially improve on the corresponding bound in the  and  models by Lenzen and Wattenhofer [PODC'11].
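To make the problem itself concrete: a maximal independent set is a set of pairwise non-adjacent nodes that cannot be extended by any further node. Below is a simple sequential greedy sketch on a small tree, purely to illustrate the definition; it is not the paper's low-memory MPC algorithm, which distributes the computation across many machines with strongly sublinear memory each.

```python
# Sequential greedy MIS on a small tree -- only to illustrate what a Maximal
# Independent Set is; NOT the paper's low-memory MPC algorithm.
def greedy_mis(adj):
    """Return a maximal independent set: no two chosen nodes are adjacent,
    and no further node can be added without breaking independence."""
    mis, blocked = set(), set()
    for v in adj:                      # any processing order yields *a* MIS
        if v not in blocked:
            mis.add(v)
            blocked.add(v)
            blocked.update(adj[v])     # neighbors can no longer join
    return mis

# A small tree: 0 is the root with children 1 and 2; node 1 has children 3 and 4.
tree = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
print(greedy_mis(tree))                # e.g. {0, 3, 4}
```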




Link:

https://www.sciencedirect.com/science/article/pii/S0304397520305788



04



Journal: J. Comput. Syst. Sci.

Impact Factor (IF): 1.494

Published: 2020-11-19


Abstract


In k-Clustering we are given a multiset of n vectors $X \subset \mathbb{Z}^d$ and a nonnegative number $D$, and we need to decide whether $X$ can be partitioned into $k$ clusters $C_1, \ldots, C_k$ such that the cost

$$\sum_{i=1}^{k} \min_{c_i \in \mathbb{R}^d} \sum_{x \in C_i} \lVert x - c_i \rVert_p^p \le D,$$

where $\lVert \cdot \rVert_p$ is the $L_p$-norm. For $p = 2$, k-Clustering is k-Means. We study k-Clustering from the perspective of parameterized complexity. The problem is known to be NP-hard for $k = 2$ and also for $d = 2$. It is a long-standing open question whether the problem is fixed-parameter tractable (FPT) for the combined parameter $d + k$. In this paper, we focus on the parameterization by $D$. We complement the known negative results by showing that for $p = 0$ and $p = \infty$, k-Clustering is W[1]-hard when parameterized by $D$. Interestingly, we discover a tractability island of k-Clustering: for every $p \in (0, 1]$, k-Clustering is solvable in time $2^{O(D \log D)} (nd)^{O(1)}$.
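As a small numeric illustration of the cost function: for $p = 2$ the inner minimum over $c_i$ is attained at the cluster mean, which is exactly the k-Means objective. The sketch below (example vectors are made up) just evaluates that cost for a fixed partition.

```python
# Minimal numeric illustration of the k-Clustering cost for p = 2, where the
# inner minimum over c_i is attained at the cluster mean (the k-Means objective).
# For other p the optimal center is not the mean in general.
def cluster_cost_p2(cluster):
    d = len(cluster[0])
    mean = [sum(x[j] for x in cluster) / len(cluster) for j in range(d)]
    return sum(sum((x[j] - mean[j]) ** 2 for j in range(d)) for x in cluster)

# Two clusters of 2-dimensional integer vectors (k = 2, d = 2).
C1 = [(0, 0), (0, 2)]           # mean (0, 1), cost 1 + 1 = 2
C2 = [(5, 5), (7, 5), (6, 8)]   # mean (6, 6), cost 2 + 2 + 4 = 8
total = cluster_cost_p2(C1) + cluster_cost_p2(C2)
print(total)                    # 10.0 -> a yes-instance for any D >= 10
```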

Link:

https://www.sciencedirect.com/science/article/pii/S0022000020300982



05





Journal: Int. J. Comput. Sci. Eng.

Impact Factor (IF): 2.644

Published: 2020-11-26


Abstract


The present paper develops a fast method to simulate the solidification structure of continuous billets with a Cellular Automaton (CA) model. The traditional solution of the CA model on a single CPU takes a long time due to the massive datasets and complicated calculations, making it unrealistic to optimize the parameters through numerical simulation. In this paper, a parallel method based on Graphics Processing Units (GPU) is proposed to accelerate the calculation, with new algorithms for solute redistribution and neighbor capture that avoid data races in parallel computing. The new method was applied to simulate the solidification structure of an Fe-0.64C alloy, and the simulation results were in good agreement with the experimental results obtained with the same parameters. The absolute computational time for the fast method implemented on a Tesla P100 GPU is 277 s, while the traditional method implemented on a single core of an Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40 GHz takes 24.57 h. The speedup, i.e., the ratio of CPU-CA to GPU-CA computational time, varies from 300 to 400 as the number of grids increases.
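A quick check of the reported numbers: converting the single-core CPU time to seconds and dividing by the GPU time gives a speedup of roughly 319, which sits inside the 300-400 range quoted for growing grid counts.

```python
# Sanity check of the reported speedup: CPU time / GPU time.
cpu_seconds = 24.57 * 3600   # 24.57 h on a single Xeon E5-2680 v4 core
gpu_seconds = 277            # Tesla P100 run reported in the abstract
print(round(cpu_seconds / gpu_seconds))  # ~319, inside the reported 300-400 range
```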

Link:

https://www.sciencedirect.com/science/article/abs/pii/S1877750320305639








